Test Report: KVM_Linux_crio 19649

32fce3c1cb58db02ee1cd4b36165a584c8a30f83:2024-09-16:36244

Test fail (10/213)

TestAddons/Setup (2400.05s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-529439 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-529439 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.950420966s)
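The "signal: killed" after 39m59.95s matches the 2400s budget reported for TestAddons/Setup, so the start command was terminated by the harness timeout rather than exiting on its own. The Go sketch below is an illustration only (not minikube's actual test code) of how a subprocess run under a roughly 40-minute context deadline ends up reported this way; the command, flags, and timeout value are assumptions copied or trimmed from the log above.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Hypothetical 40-minute budget, mirroring the 2400.05s figure in this report.
	ctx, cancel := context.WithTimeout(context.Background(), 40*time.Minute)
	defer cancel()

	// Command and flags are taken from the log above, trimmed for brevity.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"start", "-p", "addons-529439", "--wait=true", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// When ctx expires, CommandContext kills the child process and the error
		// surfaced by CombinedOutput reads "signal: killed", as in the line above.
		fmt.Printf("Non-zero exit: %v\n%s\n", err, out)
	}
}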

-- stdout --
	* [addons-529439] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-529439" primary control-plane node in "addons-529439" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529439 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-529439 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: helm-tiller, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
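The gcp-auth note in the stdout above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch (not part of this report), the hypothetical pod below sets that label using the Kubernetes Go API types; the label value "true", the pod name, and the namespace are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod that opts out of gcp-auth credential mounting via the
	// label key mentioned in the addon output above; the "true" value is an
	// assumption, not taken from this report.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "example-no-gcp-creds", // assumed name
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "docker.io/busybox:stable"}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}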
** stderr ** 
	I0916 17:24:58.887624  379161 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:24:58.887753  379161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:24:58.887768  379161 out.go:358] Setting ErrFile to fd 2...
	I0916 17:24:58.887773  379161 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:24:58.887997  379161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 17:24:58.888691  379161 out.go:352] Setting JSON to false
	I0916 17:24:58.889764  379161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4042,"bootTime":1726503457,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:24:58.889919  379161 start.go:139] virtualization: kvm guest
	I0916 17:24:58.892093  379161 out.go:177] * [addons-529439] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:24:58.893644  379161 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 17:24:58.893692  379161 notify.go:220] Checking for updates...
	I0916 17:24:58.896606  379161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:24:58.898695  379161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 17:24:58.900554  379161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 17:24:58.902105  379161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 17:24:58.903594  379161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 17:24:58.905348  379161 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:24:58.940059  379161 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 17:24:58.941524  379161 start.go:297] selected driver: kvm2
	I0916 17:24:58.941549  379161 start.go:901] validating driver "kvm2" against <nil>
	I0916 17:24:58.941565  379161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 17:24:58.942373  379161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:24:58.942497  379161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 17:24:58.958798  379161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 17:24:58.958887  379161 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:24:58.959165  379161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 17:24:58.959207  379161 cni.go:84] Creating CNI manager for ""
	I0916 17:24:58.959253  379161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 17:24:58.959282  379161 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:24:58.959396  379161 start.go:340] cluster config:
	{Name:addons-529439 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-529439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:24:58.959536  379161 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:24:58.961604  379161 out.go:177] * Starting "addons-529439" primary control-plane node in "addons-529439" cluster
	I0916 17:24:58.963021  379161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 17:24:58.963075  379161 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 17:24:58.963091  379161 cache.go:56] Caching tarball of preloaded images
	I0916 17:24:58.963218  379161 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 17:24:58.963232  379161 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 17:24:58.963615  379161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/config.json ...
	I0916 17:24:58.963653  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/config.json: {Name:mk217a7ada888ad030ee04baca6b0e3f23ab53ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:24:58.963828  379161 start.go:360] acquireMachinesLock for addons-529439: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 17:24:58.963902  379161 start.go:364] duration metric: took 48.784µs to acquireMachinesLock for "addons-529439"
	I0916 17:24:58.963929  379161 start.go:93] Provisioning new machine with config: &{Name:addons-529439 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-529439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 17:24:58.963999  379161 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 17:24:58.965892  379161 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0916 17:24:58.966072  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:24:58.966129  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:24:58.981571  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
	I0916 17:24:58.982126  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:24:58.982877  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:24:58.982913  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:24:58.983282  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:24:58.983618  379161 main.go:141] libmachine: (addons-529439) Calling .GetMachineName
	I0916 17:24:58.983761  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:24:58.983918  379161 start.go:159] libmachine.API.Create for "addons-529439" (driver="kvm2")
	I0916 17:24:58.983959  379161 client.go:168] LocalClient.Create starting
	I0916 17:24:58.984000  379161 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 17:24:59.200880  379161 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 17:24:59.445105  379161 main.go:141] libmachine: Running pre-create checks...
	I0916 17:24:59.445135  379161 main.go:141] libmachine: (addons-529439) Calling .PreCreateCheck
	I0916 17:24:59.445764  379161 main.go:141] libmachine: (addons-529439) Calling .GetConfigRaw
	I0916 17:24:59.446256  379161 main.go:141] libmachine: Creating machine...
	I0916 17:24:59.446273  379161 main.go:141] libmachine: (addons-529439) Calling .Create
	I0916 17:24:59.446408  379161 main.go:141] libmachine: (addons-529439) Creating KVM machine...
	I0916 17:24:59.447739  379161 main.go:141] libmachine: (addons-529439) DBG | found existing default KVM network
	I0916 17:24:59.448509  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:24:59.448345  379182 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014fc0}
	I0916 17:24:59.448619  379161 main.go:141] libmachine: (addons-529439) DBG | created network xml: 
	I0916 17:24:59.448648  379161 main.go:141] libmachine: (addons-529439) DBG | <network>
	I0916 17:24:59.448660  379161 main.go:141] libmachine: (addons-529439) DBG |   <name>mk-addons-529439</name>
	I0916 17:24:59.448673  379161 main.go:141] libmachine: (addons-529439) DBG |   <dns enable='no'/>
	I0916 17:24:59.448710  379161 main.go:141] libmachine: (addons-529439) DBG |   
	I0916 17:24:59.448737  379161 main.go:141] libmachine: (addons-529439) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 17:24:59.448747  379161 main.go:141] libmachine: (addons-529439) DBG |     <dhcp>
	I0916 17:24:59.448765  379161 main.go:141] libmachine: (addons-529439) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 17:24:59.448776  379161 main.go:141] libmachine: (addons-529439) DBG |     </dhcp>
	I0916 17:24:59.448782  379161 main.go:141] libmachine: (addons-529439) DBG |   </ip>
	I0916 17:24:59.448790  379161 main.go:141] libmachine: (addons-529439) DBG |   
	I0916 17:24:59.448797  379161 main.go:141] libmachine: (addons-529439) DBG | </network>
	I0916 17:24:59.448808  379161 main.go:141] libmachine: (addons-529439) DBG | 
	I0916 17:24:59.454510  379161 main.go:141] libmachine: (addons-529439) DBG | trying to create private KVM network mk-addons-529439 192.168.39.0/24...
	I0916 17:24:59.520260  379161 main.go:141] libmachine: (addons-529439) DBG | private KVM network mk-addons-529439 192.168.39.0/24 created
	I0916 17:24:59.520307  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:24:59.520229  379182 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 17:24:59.520342  379161 main.go:141] libmachine: (addons-529439) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439 ...
	I0916 17:24:59.520382  379161 main.go:141] libmachine: (addons-529439) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 17:24:59.520409  379161 main.go:141] libmachine: (addons-529439) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 17:24:59.802238  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:24:59.802101  379182 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa...
	I0916 17:24:59.948669  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:24:59.948499  379182 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/addons-529439.rawdisk...
	I0916 17:24:59.948707  379161 main.go:141] libmachine: (addons-529439) DBG | Writing magic tar header
	I0916 17:24:59.948723  379161 main.go:141] libmachine: (addons-529439) DBG | Writing SSH key tar header
	I0916 17:24:59.949399  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:24:59.949288  379182 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439 ...
	I0916 17:24:59.949482  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439
	I0916 17:24:59.949524  379161 main.go:141] libmachine: (addons-529439) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439 (perms=drwx------)
	I0916 17:24:59.949540  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 17:24:59.949558  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 17:24:59.949567  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 17:24:59.949576  379161 main.go:141] libmachine: (addons-529439) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 17:24:59.949593  379161 main.go:141] libmachine: (addons-529439) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 17:24:59.949605  379161 main.go:141] libmachine: (addons-529439) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 17:24:59.949619  379161 main.go:141] libmachine: (addons-529439) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 17:24:59.949631  379161 main.go:141] libmachine: (addons-529439) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 17:24:59.949642  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 17:24:59.949649  379161 main.go:141] libmachine: (addons-529439) Creating domain...
	I0916 17:24:59.949663  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home/jenkins
	I0916 17:24:59.949673  379161 main.go:141] libmachine: (addons-529439) DBG | Checking permissions on dir: /home
	I0916 17:24:59.949685  379161 main.go:141] libmachine: (addons-529439) DBG | Skipping /home - not owner
	I0916 17:24:59.950934  379161 main.go:141] libmachine: (addons-529439) define libvirt domain using xml: 
	I0916 17:24:59.950964  379161 main.go:141] libmachine: (addons-529439) <domain type='kvm'>
	I0916 17:24:59.950972  379161 main.go:141] libmachine: (addons-529439)   <name>addons-529439</name>
	I0916 17:24:59.950977  379161 main.go:141] libmachine: (addons-529439)   <memory unit='MiB'>4000</memory>
	I0916 17:24:59.950986  379161 main.go:141] libmachine: (addons-529439)   <vcpu>2</vcpu>
	I0916 17:24:59.950990  379161 main.go:141] libmachine: (addons-529439)   <features>
	I0916 17:24:59.950995  379161 main.go:141] libmachine: (addons-529439)     <acpi/>
	I0916 17:24:59.950999  379161 main.go:141] libmachine: (addons-529439)     <apic/>
	I0916 17:24:59.951006  379161 main.go:141] libmachine: (addons-529439)     <pae/>
	I0916 17:24:59.951011  379161 main.go:141] libmachine: (addons-529439)     
	I0916 17:24:59.951019  379161 main.go:141] libmachine: (addons-529439)   </features>
	I0916 17:24:59.951026  379161 main.go:141] libmachine: (addons-529439)   <cpu mode='host-passthrough'>
	I0916 17:24:59.951037  379161 main.go:141] libmachine: (addons-529439)   
	I0916 17:24:59.951051  379161 main.go:141] libmachine: (addons-529439)   </cpu>
	I0916 17:24:59.951060  379161 main.go:141] libmachine: (addons-529439)   <os>
	I0916 17:24:59.951078  379161 main.go:141] libmachine: (addons-529439)     <type>hvm</type>
	I0916 17:24:59.951110  379161 main.go:141] libmachine: (addons-529439)     <boot dev='cdrom'/>
	I0916 17:24:59.951127  379161 main.go:141] libmachine: (addons-529439)     <boot dev='hd'/>
	I0916 17:24:59.951134  379161 main.go:141] libmachine: (addons-529439)     <bootmenu enable='no'/>
	I0916 17:24:59.951138  379161 main.go:141] libmachine: (addons-529439)   </os>
	I0916 17:24:59.951143  379161 main.go:141] libmachine: (addons-529439)   <devices>
	I0916 17:24:59.951148  379161 main.go:141] libmachine: (addons-529439)     <disk type='file' device='cdrom'>
	I0916 17:24:59.951157  379161 main.go:141] libmachine: (addons-529439)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/boot2docker.iso'/>
	I0916 17:24:59.951164  379161 main.go:141] libmachine: (addons-529439)       <target dev='hdc' bus='scsi'/>
	I0916 17:24:59.951170  379161 main.go:141] libmachine: (addons-529439)       <readonly/>
	I0916 17:24:59.951174  379161 main.go:141] libmachine: (addons-529439)     </disk>
	I0916 17:24:59.951180  379161 main.go:141] libmachine: (addons-529439)     <disk type='file' device='disk'>
	I0916 17:24:59.951188  379161 main.go:141] libmachine: (addons-529439)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 17:24:59.951196  379161 main.go:141] libmachine: (addons-529439)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/addons-529439.rawdisk'/>
	I0916 17:24:59.951203  379161 main.go:141] libmachine: (addons-529439)       <target dev='hda' bus='virtio'/>
	I0916 17:24:59.951211  379161 main.go:141] libmachine: (addons-529439)     </disk>
	I0916 17:24:59.951217  379161 main.go:141] libmachine: (addons-529439)     <interface type='network'>
	I0916 17:24:59.951224  379161 main.go:141] libmachine: (addons-529439)       <source network='mk-addons-529439'/>
	I0916 17:24:59.951233  379161 main.go:141] libmachine: (addons-529439)       <model type='virtio'/>
	I0916 17:24:59.951239  379161 main.go:141] libmachine: (addons-529439)     </interface>
	I0916 17:24:59.951243  379161 main.go:141] libmachine: (addons-529439)     <interface type='network'>
	I0916 17:24:59.951248  379161 main.go:141] libmachine: (addons-529439)       <source network='default'/>
	I0916 17:24:59.951255  379161 main.go:141] libmachine: (addons-529439)       <model type='virtio'/>
	I0916 17:24:59.951260  379161 main.go:141] libmachine: (addons-529439)     </interface>
	I0916 17:24:59.951266  379161 main.go:141] libmachine: (addons-529439)     <serial type='pty'>
	I0916 17:24:59.951271  379161 main.go:141] libmachine: (addons-529439)       <target port='0'/>
	I0916 17:24:59.951277  379161 main.go:141] libmachine: (addons-529439)     </serial>
	I0916 17:24:59.951288  379161 main.go:141] libmachine: (addons-529439)     <console type='pty'>
	I0916 17:24:59.951295  379161 main.go:141] libmachine: (addons-529439)       <target type='serial' port='0'/>
	I0916 17:24:59.951301  379161 main.go:141] libmachine: (addons-529439)     </console>
	I0916 17:24:59.951307  379161 main.go:141] libmachine: (addons-529439)     <rng model='virtio'>
	I0916 17:24:59.951313  379161 main.go:141] libmachine: (addons-529439)       <backend model='random'>/dev/random</backend>
	I0916 17:24:59.951319  379161 main.go:141] libmachine: (addons-529439)     </rng>
	I0916 17:24:59.951324  379161 main.go:141] libmachine: (addons-529439)     
	I0916 17:24:59.951333  379161 main.go:141] libmachine: (addons-529439)     
	I0916 17:24:59.951340  379161 main.go:141] libmachine: (addons-529439)   </devices>
	I0916 17:24:59.951344  379161 main.go:141] libmachine: (addons-529439) </domain>
	I0916 17:24:59.951355  379161 main.go:141] libmachine: (addons-529439) 
	I0916 17:24:59.957774  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:46:78:b9 in network default
	I0916 17:24:59.958343  379161 main.go:141] libmachine: (addons-529439) Ensuring networks are active...
	I0916 17:24:59.958363  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:24:59.959052  379161 main.go:141] libmachine: (addons-529439) Ensuring network default is active
	I0916 17:24:59.959386  379161 main.go:141] libmachine: (addons-529439) Ensuring network mk-addons-529439 is active
	I0916 17:24:59.959855  379161 main.go:141] libmachine: (addons-529439) Getting domain xml...
	I0916 17:24:59.960485  379161 main.go:141] libmachine: (addons-529439) Creating domain...
	I0916 17:25:01.412952  379161 main.go:141] libmachine: (addons-529439) Waiting to get IP...
	I0916 17:25:01.413837  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:01.414232  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:01.414279  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:01.414226  379182 retry.go:31] will retry after 242.667989ms: waiting for machine to come up
	I0916 17:25:01.658861  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:01.659324  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:01.659352  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:01.659274  379182 retry.go:31] will retry after 345.496133ms: waiting for machine to come up
	I0916 17:25:02.006872  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:02.007404  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:02.007433  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:02.007374  379182 retry.go:31] will retry after 296.493013ms: waiting for machine to come up
	I0916 17:25:02.305976  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:02.306400  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:02.306434  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:02.306335  379182 retry.go:31] will retry after 372.204027ms: waiting for machine to come up
	I0916 17:25:02.680037  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:02.680464  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:02.680494  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:02.680400  379182 retry.go:31] will retry after 486.12886ms: waiting for machine to come up
	I0916 17:25:03.167953  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:03.168355  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:03.168389  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:03.168315  379182 retry.go:31] will retry after 595.445178ms: waiting for machine to come up
	I0916 17:25:03.765222  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:03.765616  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:03.765653  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:03.765571  379182 retry.go:31] will retry after 883.818614ms: waiting for machine to come up
	I0916 17:25:04.651192  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:04.651625  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:04.651655  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:04.651563  379182 retry.go:31] will retry after 985.710645ms: waiting for machine to come up
	I0916 17:25:05.638916  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:05.639343  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:05.639380  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:05.639271  379182 retry.go:31] will retry after 1.183381292s: waiting for machine to come up
	I0916 17:25:06.824622  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:06.825130  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:06.825158  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:06.825050  379182 retry.go:31] will retry after 1.429816707s: waiting for machine to come up
	I0916 17:25:08.256802  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:08.257290  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:08.257323  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:08.257221  379182 retry.go:31] will retry after 1.817742103s: waiting for machine to come up
	I0916 17:25:10.077461  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:10.077959  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:10.077988  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:10.077894  379182 retry.go:31] will retry after 3.585734766s: waiting for machine to come up
	I0916 17:25:13.667959  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:13.668511  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:13.668538  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:13.668334  379182 retry.go:31] will retry after 2.898984481s: waiting for machine to come up
	I0916 17:25:16.570069  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:16.570513  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find current IP address of domain addons-529439 in network mk-addons-529439
	I0916 17:25:16.570537  379161 main.go:141] libmachine: (addons-529439) DBG | I0916 17:25:16.570464  379182 retry.go:31] will retry after 4.40092801s: waiting for machine to come up
	I0916 17:25:20.974253  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:20.974678  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has current primary IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:20.974699  379161 main.go:141] libmachine: (addons-529439) Found IP for machine: 192.168.39.32
	I0916 17:25:20.974708  379161 main.go:141] libmachine: (addons-529439) Reserving static IP address...
	I0916 17:25:20.975074  379161 main.go:141] libmachine: (addons-529439) DBG | unable to find host DHCP lease matching {name: "addons-529439", mac: "52:54:00:7f:04:15", ip: "192.168.39.32"} in network mk-addons-529439
	I0916 17:25:21.060756  379161 main.go:141] libmachine: (addons-529439) Reserved static IP address: 192.168.39.32
	I0916 17:25:21.060790  379161 main.go:141] libmachine: (addons-529439) DBG | Getting to WaitForSSH function...
	I0916 17:25:21.060800  379161 main.go:141] libmachine: (addons-529439) Waiting for SSH to be available...
	I0916 17:25:21.063349  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.063949  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.063984  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.064121  379161 main.go:141] libmachine: (addons-529439) DBG | Using SSH client type: external
	I0916 17:25:21.064152  379161 main.go:141] libmachine: (addons-529439) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa (-rw-------)
	I0916 17:25:21.064186  379161 main.go:141] libmachine: (addons-529439) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.32 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 17:25:21.064205  379161 main.go:141] libmachine: (addons-529439) DBG | About to run SSH command:
	I0916 17:25:21.064219  379161 main.go:141] libmachine: (addons-529439) DBG | exit 0
	I0916 17:25:21.201500  379161 main.go:141] libmachine: (addons-529439) DBG | SSH cmd err, output: <nil>: 
	I0916 17:25:21.201757  379161 main.go:141] libmachine: (addons-529439) KVM machine creation complete!
	I0916 17:25:21.202128  379161 main.go:141] libmachine: (addons-529439) Calling .GetConfigRaw
	I0916 17:25:21.202802  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:21.203038  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:21.203284  379161 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 17:25:21.203345  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:21.204818  379161 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 17:25:21.204830  379161 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 17:25:21.204836  379161 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 17:25:21.204843  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.207318  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.207718  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.207740  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.207886  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:21.208108  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.208273  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.208401  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:21.208520  379161 main.go:141] libmachine: Using SSH client type: native
	I0916 17:25:21.208736  379161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 17:25:21.208749  379161 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 17:25:21.316604  379161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 17:25:21.316637  379161 main.go:141] libmachine: Detecting the provisioner...
	I0916 17:25:21.316648  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.319821  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.320164  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.320213  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.320414  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:21.320616  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.320789  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.320927  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:21.321117  379161 main.go:141] libmachine: Using SSH client type: native
	I0916 17:25:21.321295  379161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 17:25:21.321306  379161 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 17:25:21.430214  379161 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 17:25:21.430326  379161 main.go:141] libmachine: found compatible host: buildroot
	I0916 17:25:21.430339  379161 main.go:141] libmachine: Provisioning with buildroot...
	I0916 17:25:21.430348  379161 main.go:141] libmachine: (addons-529439) Calling .GetMachineName
	I0916 17:25:21.430695  379161 buildroot.go:166] provisioning hostname "addons-529439"
	I0916 17:25:21.430729  379161 main.go:141] libmachine: (addons-529439) Calling .GetMachineName
	I0916 17:25:21.430968  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.433679  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.434083  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.434140  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.434262  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:21.434495  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.434649  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.434803  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:21.434951  379161 main.go:141] libmachine: Using SSH client type: native
	I0916 17:25:21.435124  379161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 17:25:21.435134  379161 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-529439 && echo "addons-529439" | sudo tee /etc/hostname
	I0916 17:25:21.557070  379161 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-529439
	
	I0916 17:25:21.557107  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.560104  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.560483  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.560517  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.560715  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:21.560948  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.561122  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.561258  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:21.561482  379161 main.go:141] libmachine: Using SSH client type: native
	I0916 17:25:21.561656  379161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 17:25:21.561670  379161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-529439' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-529439/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-529439' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 17:25:21.678578  379161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 17:25:21.678634  379161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 17:25:21.678702  379161 buildroot.go:174] setting up certificates
	I0916 17:25:21.678722  379161 provision.go:84] configureAuth start
	I0916 17:25:21.678744  379161 main.go:141] libmachine: (addons-529439) Calling .GetMachineName
	I0916 17:25:21.679074  379161 main.go:141] libmachine: (addons-529439) Calling .GetIP
	I0916 17:25:21.682145  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.682695  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.682725  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.682849  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.685141  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.685500  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.685531  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.685740  379161 provision.go:143] copyHostCerts
	I0916 17:25:21.685828  379161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 17:25:21.685973  379161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 17:25:21.686039  379161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 17:25:21.686090  379161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.addons-529439 san=[127.0.0.1 192.168.39.32 addons-529439 localhost minikube]
	I0916 17:25:21.782860  379161 provision.go:177] copyRemoteCerts
	I0916 17:25:21.782941  379161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 17:25:21.782972  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.785391  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.785803  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.785838  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.785978  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:21.786174  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.786319  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:21.786444  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:21.871965  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 17:25:21.899515  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 17:25:21.926353  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 17:25:21.953659  379161 provision.go:87] duration metric: took 274.914329ms to configureAuth
	I0916 17:25:21.953699  379161 buildroot.go:189] setting minikube options for container-runtime
	I0916 17:25:21.953883  379161 config.go:182] Loaded profile config "addons-529439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 17:25:21.953973  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:21.957537  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.957907  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:21.957962  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:21.958205  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:21.958423  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.958592  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:21.958735  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:21.958985  379161 main.go:141] libmachine: Using SSH client type: native
	I0916 17:25:21.959173  379161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 17:25:21.959190  379161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 17:25:22.192853  379161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 17:25:22.192887  379161 main.go:141] libmachine: Checking connection to Docker...
	I0916 17:25:22.192897  379161 main.go:141] libmachine: (addons-529439) Calling .GetURL
	I0916 17:25:22.194303  379161 main.go:141] libmachine: (addons-529439) DBG | Using libvirt version 6000000
	I0916 17:25:22.196475  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.196850  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.196874  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.197125  379161 main.go:141] libmachine: Docker is up and running!
	I0916 17:25:22.197139  379161 main.go:141] libmachine: Reticulating splines...
	I0916 17:25:22.197147  379161 client.go:171] duration metric: took 23.213177686s to LocalClient.Create
	I0916 17:25:22.197170  379161 start.go:167] duration metric: took 23.213254345s to libmachine.API.Create "addons-529439"
	I0916 17:25:22.197189  379161 start.go:293] postStartSetup for "addons-529439" (driver="kvm2")
	I0916 17:25:22.197211  379161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 17:25:22.197240  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:22.197579  379161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 17:25:22.197610  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:22.200055  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.200439  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.200464  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.200653  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:22.200835  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:22.201010  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:22.201157  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:22.288701  379161 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 17:25:22.293762  379161 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 17:25:22.293801  379161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 17:25:22.293896  379161 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 17:25:22.293929  379161 start.go:296] duration metric: took 96.731569ms for postStartSetup
	I0916 17:25:22.293971  379161 main.go:141] libmachine: (addons-529439) Calling .GetConfigRaw
	I0916 17:25:22.294697  379161 main.go:141] libmachine: (addons-529439) Calling .GetIP
	I0916 17:25:22.297590  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.298042  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.298069  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.298329  379161 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/config.json ...
	I0916 17:25:22.298560  379161 start.go:128] duration metric: took 23.334547193s to createHost
	I0916 17:25:22.298589  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:22.301302  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.301583  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.301621  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.301768  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:22.301982  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:22.302138  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:22.302301  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:22.302519  379161 main.go:141] libmachine: Using SSH client type: native
	I0916 17:25:22.302714  379161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I0916 17:25:22.302725  379161 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 17:25:22.410181  379161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726507522.381628703
	
	I0916 17:25:22.410213  379161 fix.go:216] guest clock: 1726507522.381628703
	I0916 17:25:22.410224  379161 fix.go:229] Guest: 2024-09-16 17:25:22.381628703 +0000 UTC Remote: 2024-09-16 17:25:22.298572954 +0000 UTC m=+23.448346058 (delta=83.055749ms)
	I0916 17:25:22.410257  379161 fix.go:200] guest clock delta is within tolerance: 83.055749ms
	I0916 17:25:22.410265  379161 start.go:83] releasing machines lock for "addons-529439", held for 23.446350762s
	I0916 17:25:22.410300  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:22.410681  379161 main.go:141] libmachine: (addons-529439) Calling .GetIP
	I0916 17:25:22.413585  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.414028  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.414059  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.414271  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:22.414895  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:22.415170  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:22.415301  379161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 17:25:22.415375  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:22.415503  379161 ssh_runner.go:195] Run: cat /version.json
	I0916 17:25:22.415529  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:22.417995  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.418285  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.418322  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.418336  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.418506  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:22.418715  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:22.418751  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:22.418775  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:22.418887  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:22.418976  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:22.419045  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:22.419158  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:22.419279  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:22.419417  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:22.525410  379161 ssh_runner.go:195] Run: systemctl --version
	I0916 17:25:22.531772  379161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 17:25:22.691313  379161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 17:25:22.698344  379161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 17:25:22.698435  379161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 17:25:22.715516  379161 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
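	The find/-exec step above sidelines any pre-existing bridge/podman CNI configs by renaming them with a ".mk_disabled" suffix (here, 87-podman-bridge.conflist), so they cannot conflict with the conflist minikube writes once the runtime is up. If needed, the result can be inspected from the guest, e.g. via "minikube ssh" (illustrative):
	  sudo ls -la /etc/cni/net.d/   # expect 87-podman-bridge.conflist.mk_disabled after this step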
	I0916 17:25:22.715550  379161 start.go:495] detecting cgroup driver to use...
	I0916 17:25:22.715622  379161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 17:25:22.733231  379161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 17:25:22.748520  379161 docker.go:217] disabling cri-docker service (if available) ...
	I0916 17:25:22.748601  379161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 17:25:22.763231  379161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 17:25:22.779091  379161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 17:25:22.906771  379161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 17:25:23.080131  379161 docker.go:233] disabling docker service ...
	I0916 17:25:23.080205  379161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 17:25:23.096026  379161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 17:25:23.110705  379161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 17:25:23.230565  379161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 17:25:23.354696  379161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 17:25:23.370060  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 17:25:23.389515  379161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 17:25:23.389580  379161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.401174  379161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 17:25:23.401241  379161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.412587  379161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.424310  379161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.436367  379161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 17:25:23.448526  379161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.460193  379161 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.478352  379161 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 17:25:23.490225  379161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 17:25:23.501059  379161 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 17:25:23.501129  379161 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 17:25:23.516095  379161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 17:25:23.527397  379161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:25:23.645386  379161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 17:25:23.739893  379161 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 17:25:23.740011  379161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 17:25:23.745036  379161 start.go:563] Will wait 60s for crictl version
	I0916 17:25:23.745137  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:25:23.749381  379161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 17:25:23.791103  379161 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 17:25:23.791221  379161 ssh_runner.go:195] Run: crio --version
	I0916 17:25:23.820933  379161 ssh_runner.go:195] Run: crio --version
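	The sed edits above point CRI-O at the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, write /etc/crictl.yaml so crictl talks to /var/run/crio/crio.sock, and restart the service. A quick spot-check from inside the guest could look like this (a sketch; exact output formatting varies by CRI-O version):
	  cat /etc/sysconfig/crio.minikube                      # CRIO_MINIKUBE_OPTIONS written earlier over SSH
	  sudo crictl info                                      # confirms crictl reaches the CRI-O socket
	  crio config | grep -E 'pause_image|cgroup_manager'    # expect pause:3.10 and "cgroupfs" per the edits above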
	I0916 17:25:23.852249  379161 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 17:25:23.853849  379161 main.go:141] libmachine: (addons-529439) Calling .GetIP
	I0916 17:25:23.856591  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:23.856991  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:23.857028  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:23.857247  379161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 17:25:23.861804  379161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 17:25:23.875475  379161 kubeadm.go:883] updating cluster {Name:addons-529439 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-529439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 17:25:23.875599  379161 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 17:25:23.875646  379161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 17:25:23.908547  379161 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 17:25:23.908640  379161 ssh_runner.go:195] Run: which lz4
	I0916 17:25:23.912712  379161 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 17:25:23.917079  379161 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 17:25:23.917117  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 17:25:25.266977  379161 crio.go:462] duration metric: took 1.354297313s to copy over tarball
	I0916 17:25:25.267056  379161 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 17:25:27.455514  379161 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.188403955s)
	I0916 17:25:27.455547  379161 crio.go:469] duration metric: took 2.188534832s to extract the tarball
	I0916 17:25:27.455559  379161 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 17:25:27.493178  379161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 17:25:27.536854  379161 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 17:25:27.536884  379161 cache_images.go:84] Images are preloaded, skipping loading
	I0916 17:25:27.536896  379161 kubeadm.go:934] updating node { 192.168.39.32 8443 v1.31.1 crio true true} ...
	I0916 17:25:27.537054  379161 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-529439 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-529439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
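	The kubelet unit drop-in rendered above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later (the 312-byte scp below). Inside the guest it can be inspected with standard systemd tooling, for example:
	  systemctl cat kubelet            # base unit plus the 10-kubeadm.conf drop-in with the ExecStart override
	  systemctl status kubelet --no-pager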
	I0916 17:25:27.537143  379161 ssh_runner.go:195] Run: crio config
	I0916 17:25:27.588371  379161 cni.go:84] Creating CNI manager for ""
	I0916 17:25:27.588397  379161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 17:25:27.588412  379161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 17:25:27.588442  379161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-529439 NodeName:addons-529439 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 17:25:27.588612  379161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-529439"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
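	kubeadm 1.31 still accepts the kubeadm.k8s.io/v1beta3 config above but logs deprecation warnings for it (see the W0916 lines during init below). The migration command kubeadm itself suggests can be run against the same file; the output path here is just an example:
	  kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml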
	
	I0916 17:25:27.588698  379161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 17:25:27.599007  379161 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 17:25:27.599105  379161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 17:25:27.609341  379161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 17:25:27.626634  379161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 17:25:27.644302  379161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0916 17:25:27.661846  379161 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I0916 17:25:27.666005  379161 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.32	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 17:25:27.678838  379161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:25:27.797128  379161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 17:25:27.817717  379161 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439 for IP: 192.168.39.32
	I0916 17:25:27.817752  379161 certs.go:194] generating shared ca certs ...
	I0916 17:25:27.817795  379161 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:27.817997  379161 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 17:25:27.995357  379161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt ...
	I0916 17:25:27.995394  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt: {Name:mk4d181f465c6c6545f2966246d40f98f3d90653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:27.995573  379161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key ...
	I0916 17:25:27.995584  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key: {Name:mk52cf64a164062d6a2fc75dd5f46b39a8ba6069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:27.995740  379161 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 17:25:28.203246  379161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt ...
	I0916 17:25:28.203289  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt: {Name:mk4d20f07d3af207830e42c42d1d5655aac6613d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:28.203485  379161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key ...
	I0916 17:25:28.203497  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key: {Name:mkb7f15f24626cd9dcb40e96a341f18732302371 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:28.203569  379161 certs.go:256] generating profile certs ...
	I0916 17:25:28.203630  379161 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/client.key
	I0916 17:25:28.203663  379161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/client.crt with IP's: []
	I0916 17:25:28.473529  379161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/client.crt ...
	I0916 17:25:28.473565  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/client.crt: {Name:mk9ede8999f9d7c94493b6a04c2acf71f0aa6d4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:28.473734  379161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/client.key ...
	I0916 17:25:28.473746  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/client.key: {Name:mk6e6134f925e9dbbb9835f50ab1bd2c88d4dfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:28.473817  379161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.key.425b15b7
	I0916 17:25:28.473835  379161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.crt.425b15b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.32]
	I0916 17:25:28.769893  379161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.crt.425b15b7 ...
	I0916 17:25:28.769926  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.crt.425b15b7: {Name:mk837316d66ab456f3de6bc5976b39e8d0e72489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:28.770089  379161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.key.425b15b7 ...
	I0916 17:25:28.770103  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.key.425b15b7: {Name:mkca2f7965f46f9a6f365cbe80be33c2f449e477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:28.770169  379161 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.crt.425b15b7 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.crt
	I0916 17:25:28.770244  379161 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.key.425b15b7 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.key
	I0916 17:25:28.770290  379161 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.key
	I0916 17:25:28.770308  379161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.crt with IP's: []
	I0916 17:25:29.007108  379161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.crt ...
	I0916 17:25:29.007150  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.crt: {Name:mk4472ac8f24b87a519c2e310e7be1e6ec41d4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:29.007354  379161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.key ...
	I0916 17:25:29.007367  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.key: {Name:mkfd971c3c40a176841a2a3f726b2572995316df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:29.007543  379161 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 17:25:29.007582  379161 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 17:25:29.007608  379161 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 17:25:29.007637  379161 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 17:25:29.008296  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 17:25:29.037401  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 17:25:29.064034  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 17:25:29.089731  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 17:25:29.115777  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 17:25:29.141538  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 17:25:29.166072  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 17:25:29.190943  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/addons-529439/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 17:25:29.218093  379161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 17:25:29.245136  379161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 17:25:29.263954  379161 ssh_runner.go:195] Run: openssl version
	I0916 17:25:29.270010  379161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 17:25:29.282685  379161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:25:29.287672  379161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:25:29.287744  379161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 17:25:29.294399  379161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
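	The steps above install minikubeCA.pem into the guest's OpenSSL trust store: OpenSSL looks up CA certificates by subject-hash filenames, so the CA is linked under <hash>.0 (b5213941.0 in this run). Written out as a standalone sequence, the same steps amount to roughly:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"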
	I0916 17:25:29.307019  379161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 17:25:29.311874  379161 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 17:25:29.311951  379161 kubeadm.go:392] StartCluster: {Name:addons-529439 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-529439 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:25:29.312066  379161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 17:25:29.312132  379161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 17:25:29.352444  379161 cri.go:89] found id: ""
	I0916 17:25:29.352516  379161 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 17:25:29.362909  379161 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 17:25:29.373726  379161 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 17:25:29.384386  379161 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 17:25:29.384421  379161 kubeadm.go:157] found existing configuration files:
	
	I0916 17:25:29.384473  379161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 17:25:29.397003  379161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 17:25:29.397079  379161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 17:25:29.408661  379161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 17:25:29.418797  379161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 17:25:29.418872  379161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 17:25:29.432261  379161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 17:25:29.442336  379161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 17:25:29.442431  379161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 17:25:29.452477  379161 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 17:25:29.468274  379161 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 17:25:29.468354  379161 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 17:25:29.478730  379161 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 17:25:29.528784  379161 kubeadm.go:310] W0916 17:25:29.509383     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:25:29.529319  379161 kubeadm.go:310] W0916 17:25:29.510219     820 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 17:25:29.629801  379161 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 17:25:40.215623  379161 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 17:25:40.215717  379161 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 17:25:40.215808  379161 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 17:25:40.215944  379161 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 17:25:40.216061  379161 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 17:25:40.216152  379161 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 17:25:40.217964  379161 out.go:235]   - Generating certificates and keys ...
	I0916 17:25:40.218089  379161 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 17:25:40.218215  379161 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 17:25:40.218341  379161 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 17:25:40.218415  379161 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 17:25:40.218472  379161 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 17:25:40.218515  379161 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 17:25:40.218574  379161 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 17:25:40.218682  379161 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-529439 localhost] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 17:25:40.218733  379161 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 17:25:40.218861  379161 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-529439 localhost] and IPs [192.168.39.32 127.0.0.1 ::1]
	I0916 17:25:40.218953  379161 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 17:25:40.219006  379161 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 17:25:40.219043  379161 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 17:25:40.219096  379161 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 17:25:40.219140  379161 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 17:25:40.219196  379161 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 17:25:40.219241  379161 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 17:25:40.219295  379161 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 17:25:40.219344  379161 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 17:25:40.219477  379161 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 17:25:40.219550  379161 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 17:25:40.221211  379161 out.go:235]   - Booting up control plane ...
	I0916 17:25:40.221320  379161 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 17:25:40.221399  379161 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 17:25:40.221479  379161 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 17:25:40.221597  379161 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 17:25:40.221722  379161 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 17:25:40.221789  379161 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 17:25:40.221936  379161 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 17:25:40.222060  379161 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 17:25:40.222135  379161 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.115533ms
	I0916 17:25:40.222208  379161 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 17:25:40.222259  379161 kubeadm.go:310] [api-check] The API server is healthy after 6.004630643s
	I0916 17:25:40.222378  379161 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 17:25:40.222491  379161 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 17:25:40.222554  379161 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 17:25:40.222743  379161 kubeadm.go:310] [mark-control-plane] Marking the node addons-529439 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 17:25:40.222917  379161 kubeadm.go:310] [bootstrap-token] Using token: 2arhzk.74ve5u2pc50ny44t
	I0916 17:25:40.224449  379161 out.go:235]   - Configuring RBAC rules ...
	I0916 17:25:40.224597  379161 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 17:25:40.224691  379161 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 17:25:40.224871  379161 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 17:25:40.225062  379161 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 17:25:40.225204  379161 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 17:25:40.225290  379161 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 17:25:40.225446  379161 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 17:25:40.225491  379161 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 17:25:40.225535  379161 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 17:25:40.225541  379161 kubeadm.go:310] 
	I0916 17:25:40.225604  379161 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 17:25:40.225610  379161 kubeadm.go:310] 
	I0916 17:25:40.225680  379161 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 17:25:40.225686  379161 kubeadm.go:310] 
	I0916 17:25:40.225707  379161 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 17:25:40.225761  379161 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 17:25:40.225816  379161 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 17:25:40.225827  379161 kubeadm.go:310] 
	I0916 17:25:40.225869  379161 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 17:25:40.225875  379161 kubeadm.go:310] 
	I0916 17:25:40.225936  379161 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 17:25:40.225951  379161 kubeadm.go:310] 
	I0916 17:25:40.225998  379161 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 17:25:40.226062  379161 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 17:25:40.226123  379161 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 17:25:40.226129  379161 kubeadm.go:310] 
	I0916 17:25:40.226199  379161 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 17:25:40.226287  379161 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 17:25:40.226301  379161 kubeadm.go:310] 
	I0916 17:25:40.226370  379161 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2arhzk.74ve5u2pc50ny44t \
	I0916 17:25:40.226454  379161 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e \
	I0916 17:25:40.226480  379161 kubeadm.go:310] 	--control-plane 
	I0916 17:25:40.226486  379161 kubeadm.go:310] 
	I0916 17:25:40.226572  379161 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 17:25:40.226581  379161 kubeadm.go:310] 
	I0916 17:25:40.226674  379161 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2arhzk.74ve5u2pc50ny44t \
	I0916 17:25:40.226784  379161 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e 
	I0916 17:25:40.226797  379161 cni.go:84] Creating CNI manager for ""
	I0916 17:25:40.226804  379161 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 17:25:40.228453  379161 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 17:25:40.229896  379161 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 17:25:40.243845  379161 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
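	The 496-byte file written above is minikube's bridge CNI conflist. Its exact contents are not shown in this log, but a bridge conflist of this shape, using the 10.244.0.0/16 pod CIDR selected earlier, typically looks roughly like the following (illustrative sketch only):
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      { "type": "bridge", "bridge": "bridge", "addIf": "true",
	        "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	      { "type": "portmap", "capabilities": { "portMappings": true } }
	    ]
	  }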
	I0916 17:25:40.268948  379161 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 17:25:40.269034  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:40.269100  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-529439 minikube.k8s.io/updated_at=2024_09_16T17_25_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=addons-529439 minikube.k8s.io/primary=true
	I0916 17:25:40.292704  379161 ops.go:34] apiserver oom_adj: -16
	I0916 17:25:40.421959  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:40.922808  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:41.422405  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:41.923090  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:42.422134  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:42.922306  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:43.422391  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 17:25:43.922306  379161 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
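	The repeated "kubectl get sa default" calls above are a readiness poll: after the control plane comes up, minikube waits for the controller manager to create the "default" ServiceAccount before reporting the privilege-elevation step complete. As a standalone loop with the same binary and kubeconfig, that wait is roughly:
	  until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done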
	I0916 17:25:44.013531  379161 kubeadm.go:1113] duration metric: took 3.744560004s to wait for elevateKubeSystemPrivileges
	I0916 17:25:44.013585  379161 kubeadm.go:394] duration metric: took 14.701642445s to StartCluster
	I0916 17:25:44.013619  379161 settings.go:142] acquiring lock: {Name:mk9af1b5fb868180f97a2648a387fb06c7d5fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:44.013780  379161 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 17:25:44.014228  379161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/kubeconfig: {Name:mk8f19e4e61aad6cdecf3a2028815277e5ffb248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 17:25:44.014422  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 17:25:44.014442  379161 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 17:25:44.014490  379161 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 17:25:44.014595  379161 addons.go:69] Setting yakd=true in profile "addons-529439"
	I0916 17:25:44.014613  379161 addons.go:234] Setting addon yakd=true in "addons-529439"
	I0916 17:25:44.014629  379161 addons.go:69] Setting inspektor-gadget=true in profile "addons-529439"
	I0916 17:25:44.014657  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.014662  379161 addons.go:234] Setting addon inspektor-gadget=true in "addons-529439"
	I0916 17:25:44.014660  379161 config.go:182] Loaded profile config "addons-529439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 17:25:44.014669  379161 addons.go:69] Setting registry=true in profile "addons-529439"
	I0916 17:25:44.014682  379161 addons.go:69] Setting metrics-server=true in profile "addons-529439"
	I0916 17:25:44.014686  379161 addons.go:234] Setting addon registry=true in "addons-529439"
	I0916 17:25:44.014697  379161 addons.go:234] Setting addon metrics-server=true in "addons-529439"
	I0916 17:25:44.014709  379161 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-529439"
	I0916 17:25:44.014712  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.014692  379161 addons.go:69] Setting gcp-auth=true in profile "addons-529439"
	I0916 17:25:44.014712  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.014726  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.014739  379161 mustload.go:65] Loading cluster: addons-529439
	I0916 17:25:44.014747  379161 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-529439"
	I0916 17:25:44.014770  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.014852  379161 addons.go:69] Setting helm-tiller=true in profile "addons-529439"
	I0916 17:25:44.014863  379161 addons.go:234] Setting addon helm-tiller=true in "addons-529439"
	I0916 17:25:44.014883  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.014924  379161 config.go:182] Loaded profile config "addons-529439": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 17:25:44.015110  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015133  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015171  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015178  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.014661  379161 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-529439"
	I0916 17:25:44.015206  379161 addons.go:69] Setting ingress=true in profile "addons-529439"
	I0916 17:25:44.015211  379161 addons.go:69] Setting default-storageclass=true in profile "addons-529439"
	I0916 17:25:44.015211  379161 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-529439"
	I0916 17:25:44.015213  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015219  379161 addons.go:234] Setting addon ingress=true in "addons-529439"
	I0916 17:25:44.015223  379161 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-529439"
	I0916 17:25:44.015225  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015230  379161 addons.go:69] Setting ingress-dns=true in profile "addons-529439"
	I0916 17:25:44.015239  379161 addons.go:234] Setting addon ingress-dns=true in "addons-529439"
	I0916 17:25:44.015246  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015270  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.015280  379161 addons.go:69] Setting cloud-spanner=true in profile "addons-529439"
	I0916 17:25:44.015292  379161 addons.go:234] Setting addon cloud-spanner=true in "addons-529439"
	I0916 17:25:44.015294  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015341  379161 addons.go:69] Setting volcano=true in profile "addons-529439"
	I0916 17:25:44.015354  379161 addons.go:234] Setting addon volcano=true in "addons-529439"
	I0916 17:25:44.015466  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015480  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015503  379161 addons.go:69] Setting volumesnapshots=true in profile "addons-529439"
	I0916 17:25:44.015516  379161 addons.go:234] Setting addon volumesnapshots=true in "addons-529439"
	I0916 17:25:44.015519  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015537  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.015220  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015580  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015492  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015650  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015678  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015207  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015732  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015736  379161 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-529439"
	I0916 17:25:44.015754  379161 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-529439"
	I0916 17:25:44.014633  379161 addons.go:69] Setting storage-provisioner=true in profile "addons-529439"
	I0916 17:25:44.015779  379161 addons.go:234] Setting addon storage-provisioner=true in "addons-529439"
	I0916 17:25:44.015879  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.015892  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.015917  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.015986  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.015990  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.016011  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.016263  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.016295  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.016323  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.016337  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.016347  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.016350  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.016361  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.016663  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.016681  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.016724  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.016751  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.017221  379161 out.go:177] * Verifying Kubernetes components...
	I0916 17:25:44.018803  379161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 17:25:44.036443  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I0916 17:25:44.036651  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37753
	I0916 17:25:44.036746  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
	I0916 17:25:44.036821  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33651
	I0916 17:25:44.037095  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0916 17:25:44.041295  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.041332  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.041618  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.041690  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.041745  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34349
	I0916 17:25:44.041800  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.041872  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.043012  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.043031  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.043158  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.043169  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.043296  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.043306  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.043367  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.043434  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.043478  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.043537  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.043953  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.043985  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.049550  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.049576  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.049745  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.049758  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.050198  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.050239  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.051212  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.051289  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.051438  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.051453  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.051522  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.051561  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0916 17:25:44.052645  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.052692  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.054309  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.054425  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.054482  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.054902  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.054922  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.055631  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.055665  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.055803  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
	I0916 17:25:44.059993  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.060018  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.060660  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.061276  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.061300  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.066555  379161 addons.go:234] Setting addon default-storageclass=true in "addons-529439"
	I0916 17:25:44.066609  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.067007  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.067040  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.071024  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41503
	I0916 17:25:44.071620  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.072144  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.072165  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.072575  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.072771  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.074699  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.079281  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 17:25:44.081390  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 17:25:44.082797  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 17:25:44.083453  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45995
	I0916 17:25:44.083477  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0916 17:25:44.084025  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.084116  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.084555  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.084580  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.084677  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.084706  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.084942  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.085047  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 17:25:44.085332  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.085858  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.085887  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.086182  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.086814  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.086834  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.087262  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.087435  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.087554  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.087611  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.088154  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 17:25:44.089034  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0916 17:25:44.089558  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.089787  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
	I0916 17:25:44.090022  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.090438  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.090480  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.091550  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 17:25:44.091994  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.092691  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.092711  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.093153  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.093712  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.093755  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.094113  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 17:25:44.094160  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.094176  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.094920  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.095637  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.095687  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.096575  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 17:25:44.097712  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 17:25:44.097735  379161 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 17:25:44.097759  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.097825  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0916 17:25:44.098273  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.098288  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41545
	I0916 17:25:44.098726  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.098885  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.098896  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.099505  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.099585  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.099611  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.099979  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.099996  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.100192  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.102027  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.102641  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.103602  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.103666  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44187
	I0916 17:25:44.103975  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.104010  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.104218  379161 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 17:25:44.104249  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.104392  379161 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 17:25:44.104411  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.104474  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.104544  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.104812  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.105553  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.105566  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.105713  379161 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 17:25:44.105745  379161 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 17:25:44.105773  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.105721  379161 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 17:25:44.105841  379161 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 17:25:44.105867  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.106064  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.106321  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0916 17:25:44.106889  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.107536  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.107555  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.107954  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.108227  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.108862  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.110500  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.110716  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.111001  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.111021  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.111627  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0916 17:25:44.111653  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.111702  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.111853  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.112299  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.112394  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.112497  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.112681  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.112699  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.112908  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.113048  379161 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 17:25:44.113081  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.113026  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.113288  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.113517  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.114101  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.114529  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.114800  379161 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:25:44.114816  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 17:25:44.114831  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.115588  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0916 17:25:44.116094  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.116129  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.117599  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.119059  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.119076  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.119214  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.119604  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.119630  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.119860  379161 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-529439"
	I0916 17:25:44.119911  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:44.119918  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.120090  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.120224  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.120297  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.120353  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.120341  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.125190  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.125247  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37211
	I0916 17:25:44.125269  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0916 17:25:44.125696  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.125803  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.126313  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.126332  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.126723  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.126744  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.126807  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.127006  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.127177  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.127393  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.127446  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40687
	I0916 17:25:44.128411  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.128461  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.128710  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.129086  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.129838  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.129856  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.130011  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.130638  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.131298  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.131361  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.131912  379161 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 17:25:44.131969  379161 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 17:25:44.133576  379161 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 17:25:44.133596  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 17:25:44.133617  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.134236  379161 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 17:25:44.135698  379161 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 17:25:44.135717  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 17:25:44.135741  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.138824  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42075
	I0916 17:25:44.139227  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.139509  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.139790  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.140025  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.140049  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.140307  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.140328  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.140394  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.140603  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.140624  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.140643  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.140883  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.140898  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.140958  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I0916 17:25:44.141092  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.141183  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.141241  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.141468  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.141481  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.141717  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.142093  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.142109  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.142498  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.142673  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.142693  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.142707  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.144560  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.146469  379161 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 17:25:44.147817  379161 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 17:25:44.147839  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 17:25:44.147862  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.148422  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I0916 17:25:44.148595  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
	I0916 17:25:44.151180  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35857
	I0916 17:25:44.151193  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I0916 17:25:44.151350  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.151359  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.151763  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.151965  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.151980  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.152131  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.152139  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.152154  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.152320  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0916 17:25:44.152522  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.152747  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.152764  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.152946  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.152965  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.153054  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.153129  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.153227  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.154041  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.154063  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.154131  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.154174  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.154484  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.154550  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.156513  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.156606  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.156626  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.156664  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.156683  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.156705  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.156948  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.157351  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.157528  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.159064  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.159135  379161 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 17:25:44.159857  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.159921  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.159977  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38989
	I0916 17:25:44.160139  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.160544  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.160631  379161 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 17:25:44.160647  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 17:25:44.160674  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.160544  379161 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 17:25:44.161568  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.161589  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.161996  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.162073  379161 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 17:25:44.162089  379161 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 17:25:44.162140  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.162492  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.162547  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:44.162584  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:44.162702  379161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 17:25:44.162813  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:44.162829  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:44.164752  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:44.164787  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:44.164794  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:44.164802  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:44.164809  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:44.165329  379161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:25:44.166954  379161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:25:44.167297  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:44.167305  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.167301  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
	I0916 17:25:44.167320  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.167374  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.167396  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.167405  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:44.167416  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.167417  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 17:25:44.167496  379161 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0916 17:25:44.167729  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.167899  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.167996  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.168077  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.168091  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.168123  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.168486  379161 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 17:25:44.168514  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 17:25:44.168533  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.168553  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.168570  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.168641  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.168776  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.168896  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.169032  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.169667  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.169904  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.174227  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.174238  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.174261  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.174277  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.174305  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46519
	I0916 17:25:44.174308  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.174624  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.174804  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.175213  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.175297  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.175608  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I0916 17:25:44.175678  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.175692  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.176031  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.176115  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.176166  379161 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 17:25:44.176528  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.176546  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.176568  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.176892  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.177344  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.178068  379161 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 17:25:44.178092  379161 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 17:25:44.178115  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.178151  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.179254  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.179489  379161 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 17:25:44.179514  379161 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 17:25:44.179532  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.180259  379161 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 17:25:44.181843  379161 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:25:44.181863  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 17:25:44.181879  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.182028  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.182469  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.182500  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.182801  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.183174  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.183291  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.183495  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.183854  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.184146  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.184192  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.184374  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.184506  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.184591  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.184684  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	W0916 17:25:44.185058  379161 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40138->192.168.39.32:22: read: connection reset by peer
	I0916 17:25:44.185091  379161 retry.go:31] will retry after 278.117348ms: ssh: handshake failed: read tcp 192.168.39.1:40138->192.168.39.32:22: read: connection reset by peer
	I0916 17:25:44.185138  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.185585  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.185638  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	W0916 17:25:44.185668  379161 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40146->192.168.39.32:22: read: connection reset by peer
	I0916 17:25:44.185691  379161 retry.go:31] will retry after 157.855849ms: ssh: handshake failed: read tcp 192.168.39.1:40146->192.168.39.32:22: read: connection reset by peer
	I0916 17:25:44.185840  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.186030  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.186156  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.186275  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:44.189395  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0916 17:25:44.189821  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:44.190357  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:44.190378  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:44.190654  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:44.190805  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:44.192127  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:44.194012  379161 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 17:25:44.195456  379161 out.go:177]   - Using image docker.io/busybox:stable
	I0916 17:25:44.196949  379161 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:25:44.196972  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 17:25:44.196997  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:44.199913  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.200212  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:44.200266  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:44.200439  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:44.200644  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:44.200783  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:44.200938  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	W0916 17:25:44.201873  379161 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40156->192.168.39.32:22: read: connection reset by peer
	I0916 17:25:44.201901  379161 retry.go:31] will retry after 283.921ms: ssh: handshake failed: read tcp 192.168.39.1:40156->192.168.39.32:22: read: connection reset by peer
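The repeated "ssh: handshake failed ... connection reset by peer" warnings above are expected while the guest's sshd is still coming up; each failed dial is retried after a short backoff (278ms, 157ms, 283ms here) instead of aborting the start. If the retries did not converge, connectivity could be checked by hand with the same key, user and address recorded in the log. This is an illustrative manual check only, not something the test run itself executes:

    # hypothetical connectivity probe using the values logged above
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa \
        docker@192.168.39.32 true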
	I0916 17:25:44.571285  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 17:25:44.610817  379161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 17:25:44.610878  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
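The bash pipeline above is how the host record is injected into CoreDNS: it reads the kube-system/coredns ConfigMap, uses sed to insert a hosts stanza (mapping host.minikube.internal to the host-side gateway 192.168.39.1) ahead of the forward plugin plus a log directive ahead of errors, and pipes the result back through kubectl replace. Reconstructed from the sed expression, the inserted stanza is shown in the comment below; the get command is a hedged way to confirm the edit afterwards, not part of the run:

    # Inserted ahead of "forward . /etc/resolv.conf" (reconstructed, indentation approximate):
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml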
	I0916 17:25:44.640457  379161 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 17:25:44.640488  379161 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 17:25:44.642068  379161 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 17:25:44.642099  379161 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 17:25:44.648591  379161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 17:25:44.648635  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 17:25:44.661242  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 17:25:44.674485  379161 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 17:25:44.674519  379161 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 17:25:44.682041  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 17:25:44.682080  379161 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 17:25:44.720436  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 17:25:44.775189  379161 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 17:25:44.775229  379161 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 17:25:44.775309  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 17:25:44.787179  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 17:25:44.810178  379161 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:25:44.810202  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 17:25:44.844003  379161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 17:25:44.844036  379161 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 17:25:44.853750  379161 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 17:25:44.853776  379161 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 17:25:44.879831  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 17:25:44.924390  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 17:25:44.924424  379161 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 17:25:44.928318  379161 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:25:44.928345  379161 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 17:25:44.967165  379161 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:25:44.967201  379161 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 17:25:44.972191  379161 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 17:25:44.972232  379161 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 17:25:45.040423  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 17:25:45.065428  379161 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 17:25:45.065469  379161 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 17:25:45.102757  379161 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 17:25:45.102790  379161 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 17:25:45.103274  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 17:25:45.118320  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 17:25:45.128235  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 17:25:45.128273  379161 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 17:25:45.167209  379161 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 17:25:45.167242  379161 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 17:25:45.178242  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 17:25:45.253552  379161 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 17:25:45.253580  379161 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 17:25:45.279995  379161 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 17:25:45.280029  379161 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 17:25:45.336644  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 17:25:45.336689  379161 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 17:25:45.436728  379161 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:25:45.436761  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 17:25:45.531723  379161 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 17:25:45.531756  379161 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 17:25:45.532896  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 17:25:45.532937  379161 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 17:25:45.642114  379161 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 17:25:45.642152  379161 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 17:25:45.881657  379161 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 17:25:45.881689  379161 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 17:25:45.943383  379161 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 17:25:45.943414  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 17:25:45.972728  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 17:25:46.056769  379161 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 17:25:46.056805  379161 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 17:25:46.078684  379161 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:25:46.078712  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 17:25:46.273879  379161 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 17:25:46.273909  379161 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 17:25:46.386237  379161 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:25:46.386266  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 17:25:46.398244  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 17:25:46.608961  379161 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 17:25:46.608998  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 17:25:46.892929  379161 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 17:25:46.892953  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 17:25:46.952787  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:25:47.062950  379161 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 17:25:47.062990  379161 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 17:25:47.286053  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
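The lines above show the staging pattern used for every addon: each manifest is copied over the ssh client (scp from an in-memory asset or a bundled file) into /etc/kubernetes/addons/, then a whole group is applied in one batched kubectl invocation against the in-VM kubeconfig. After a batch like the csi-hostpath-driver one just above, the resulting pods can be inspected with the same label selector the verifier uses later in this log; an illustrative check, not part of the run:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver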
	I0916 17:25:51.315397  379161 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 17:25:51.315451  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:51.319297  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:51.319913  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:51.319946  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:51.320202  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:51.320473  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:51.320680  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:51.320823  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:51.875488  379161 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 17:25:52.067006  379161 addons.go:234] Setting addon gcp-auth=true in "addons-529439"
	I0916 17:25:52.067086  379161 host.go:66] Checking if "addons-529439" exists ...
	I0916 17:25:52.067569  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:52.067644  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:52.084325  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I0916 17:25:52.084944  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:52.085712  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:52.085742  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:52.086130  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:52.086872  379161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 17:25:52.086935  379161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 17:25:52.103046  379161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I0916 17:25:52.103636  379161 main.go:141] libmachine: () Calling .GetVersion
	I0916 17:25:52.104259  379161 main.go:141] libmachine: Using API Version  1
	I0916 17:25:52.104286  379161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 17:25:52.104723  379161 main.go:141] libmachine: () Calling .GetMachineName
	I0916 17:25:52.104907  379161 main.go:141] libmachine: (addons-529439) Calling .GetState
	I0916 17:25:52.106686  379161 main.go:141] libmachine: (addons-529439) Calling .DriverName
	I0916 17:25:52.106954  379161 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 17:25:52.106983  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHHostname
	I0916 17:25:52.110036  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:52.110488  379161 main.go:141] libmachine: (addons-529439) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:04:15", ip: ""} in network mk-addons-529439: {Iface:virbr1 ExpiryTime:2024-09-16 18:25:14 +0000 UTC Type:0 Mac:52:54:00:7f:04:15 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:addons-529439 Clientid:01:52:54:00:7f:04:15}
	I0916 17:25:52.110517  379161 main.go:141] libmachine: (addons-529439) DBG | domain addons-529439 has defined IP address 192.168.39.32 and MAC address 52:54:00:7f:04:15 in network mk-addons-529439
	I0916 17:25:52.110692  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHPort
	I0916 17:25:52.110866  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHKeyPath
	I0916 17:25:52.111035  379161 main.go:141] libmachine: (addons-529439) Calling .GetSSHUsername
	I0916 17:25:52.111158  379161 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/addons-529439/id_rsa Username:docker}
	I0916 17:25:53.300424  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.729083602s)
	I0916 17:25:53.300464  379161 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.689547614s)
	I0916 17:25:53.300495  379161 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0916 17:25:53.300503  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300497  379161 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.689646055s)
	I0916 17:25:53.300586  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.639305032s)
	I0916 17:25:53.300518  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300630  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300647  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300690  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.580223345s)
	I0916 17:25:53.300742  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300752  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300776  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.525441954s)
	I0916 17:25:53.300796  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300805  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300832  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.513618222s)
	I0916 17:25:53.300848  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300857  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300879  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.421014395s)
	I0916 17:25:53.300895  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300904  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300926  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.260459331s)
	I0916 17:25:53.300942  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.300951  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.300997  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.197704143s)
	I0916 17:25:53.301265  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.301272  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.123000997s)
	I0916 17:25:53.301189  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.182821508s)
	I0916 17:25:53.301297  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.301303  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.301309  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.301316  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.301280  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.301416  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.328647399s)
	I0916 17:25:53.301434  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.301443  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.301515  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.90322813s)
	I0916 17:25:53.301537  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.301547  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.301571  379161 node_ready.go:35] waiting up to 6m0s for node "addons-529439" to be "Ready" ...
	I0916 17:25:53.301671  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.348851321s)
	W0916 17:25:53.301698  379161 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 17:25:53.301717  379161 retry.go:31] will retry after 244.977481ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
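The failure above is a CRD-ordering race rather than a broken manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not finished establishing the new kind yet ("ensure CRDs are installed first"). The log handles this by retrying; the re-apply at 17:25:53.547 below uses kubectl apply --force and completes without error at 17:25:55.629 once the CRDs are registered. A hedged manual equivalent is to gate the custom resource on CRD establishment:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig wait \
      --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml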
	I0916 17:25:53.303314  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.303334  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.303337  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.303355  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.303356  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.303363  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.303371  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.303381  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.303387  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.303395  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.303401  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.303451  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.303460  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.303468  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.303474  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.303537  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.303561  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.303569  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.303576  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.303582  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.303726  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.303770  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.303777  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.303785  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.303791  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.303928  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.303953  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.303959  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.304164  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.304186  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.304193  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.304246  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.304269  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.304275  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.304282  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.304289  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.304342  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.304365  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.304372  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.304380  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.304386  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.304758  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.304789  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.304795  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.304802  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.304809  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.304858  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.304880  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.304886  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.304893  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.304899  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.305180  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.305207  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.305221  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.305229  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.305236  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.305282  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.305303  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.305309  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.305315  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.305322  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.305433  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.305456  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.305463  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.305466  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.305488  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.305511  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.305534  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.305542  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.305549  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.305556  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.305574  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.305601  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.305608  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.306995  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.307022  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.307027  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.307311  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.307411  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.307420  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.307429  379161 addons.go:475] Verifying addon ingress=true in "addons-529439"
	I0916 17:25:53.308677  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.308747  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.308767  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.308786  379161 addons.go:475] Verifying addon registry=true in "addons-529439"
	I0916 17:25:53.308811  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.308766  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.308861  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.309703  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.308911  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.308951  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.309764  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.309863  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:53.309900  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.309911  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.308976  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.310010  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.310032  379161 out.go:177] * Verifying ingress addon...
	I0916 17:25:53.310070  379161 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-529439 service yakd-dashboard -n yakd-dashboard
	
	I0916 17:25:53.305473  379161 addons.go:475] Verifying addon metrics-server=true in "addons-529439"
	I0916 17:25:53.311199  379161 out.go:177] * Verifying registry addon...
	I0916 17:25:53.312859  379161 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0916 17:25:53.313646  379161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 17:25:53.337198  379161 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 17:25:53.337227  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:53.337268  379161 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 17:25:53.337294  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:53.368191  379161 node_ready.go:49] node "addons-529439" has status "Ready":"True"
	I0916 17:25:53.368223  379161 node_ready.go:38] duration metric: took 66.627998ms for node "addons-529439" to be "Ready" ...
	I0916 17:25:53.368235  379161 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
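The kapi.go lines above are the addon verifiers: each one polls pods matching a label selector (app.kubernetes.io/name=ingress-nginx in ingress-nginx, kubernetes.io/minikube-addons=registry in kube-system) until they report Ready, while node_ready/pod_ready allow up to 6m0s for the node and the system-critical pods. A rough standalone equivalent of one of these waits, shown for illustration only (the timeout here is an example, not the verifier's actual value):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n ingress-nginx \
      wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=6m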
	I0916 17:25:53.412697  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.412725  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.413036  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.413058  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	W0916 17:25:53.413168  379161 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
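The warning above is a benign conflict between two storage addons enabled together: the stock storageclass addon tries to make its "standard" class the default while the storage-provisioner-rancher addon is concurrently annotating "local-path", so one update loses the optimistic-concurrency check ("the object has been modified"). It is surfaced as a warning and the start continues; which class ends up default can depend on ordering. If the default ever needs correcting by hand, the usual Kubernetes approach is to toggle the is-default-class annotation (illustrative, not taken from this run):

    kubectl patch storageclass local-path -p \
      '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'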
	I0916 17:25:53.413584  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:53.413607  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:53.413881  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:53.413899  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:53.454019  379161 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4qtrf" in "kube-system" namespace to be "Ready" ...
	I0916 17:25:53.547806  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 17:25:53.551329  379161 pod_ready.go:93] pod "coredns-7c65d6cfc9-4qtrf" in "kube-system" namespace has status "Ready":"True"
	I0916 17:25:53.551360  379161 pod_ready.go:82] duration metric: took 97.303438ms for pod "coredns-7c65d6cfc9-4qtrf" in "kube-system" namespace to be "Ready" ...
	I0916 17:25:53.551374  379161 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace to be "Ready" ...
	I0916 17:25:53.805223  379161 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-529439" context rescaled to 1 replicas
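The rescale above trims the kubeadm default of two CoreDNS replicas down to one for this single-node cluster, which is consistent with the second replica (coredns-7c65d6cfc9-d4zwb, tracked just above) never turning Ready and later being skipped once its phase flips to Succeeded. An illustrative manual equivalent:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system \
      scale deployment coredns --replicas=1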
	I0916 17:25:53.818053  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:53.818457  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:54.318105  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:54.318154  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:54.783806  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.497696995s)
	I0916 17:25:54.783867  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:54.783893  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:54.783875  379161 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.676895009s)
	I0916 17:25:54.784189  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:54.784230  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:54.784241  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:54.784250  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:54.784273  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:54.784490  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:54.784506  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:54.784517  379161 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-529439"
	I0916 17:25:54.785859  379161 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 17:25:54.785859  379161 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 17:25:54.788981  379161 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 17:25:54.789732  379161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 17:25:54.790696  379161 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 17:25:54.790790  379161 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 17:25:54.812539  379161 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 17:25:54.812563  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:54.832778  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:54.833161  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:54.898598  379161 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 17:25:54.898635  379161 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 17:25:54.972121  379161 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 17:25:54.972157  379161 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 17:25:55.073603  379161 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
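gcp-auth follows the same staging pattern: the host's application-default credentials and project id were copied into the VM earlier (the google_application_credentials.json and google_cloud_project scp lines), and here the namespace, service and webhook manifests are applied before the verifier a few lines below waits on the kubernetes.io/minikube-addons=gcp-auth label in the gcp-auth namespace. An illustrative status check, assuming the addon's usual mutating-webhook layout (object names not taken from this log):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n gcp-auth get pods
    kubectl --kubeconfig=/var/lib/minikube/kubeconfig get mutatingwebhookconfigurations | grep -i gcp-auth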
	I0916 17:25:55.295748  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:55.318781  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:55.319074  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:55.557666  379161 pod_ready.go:103] pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace has status "Ready":"False"
	I0916 17:25:55.629334  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.081470998s)
	I0916 17:25:55.629409  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:55.629424  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:55.629812  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:55.629830  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:55.629845  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:55.629853  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:55.630103  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:55.630141  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:55.630151  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:55.795652  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:55.818011  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:55.818635  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:56.295896  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:56.341544  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:56.341952  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:56.468930  379161 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.395262386s)
	I0916 17:25:56.468995  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:56.469010  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:56.469363  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:56.469396  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:56.469396  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:56.469413  379161 main.go:141] libmachine: Making call to close driver server
	I0916 17:25:56.469422  379161 main.go:141] libmachine: (addons-529439) Calling .Close
	I0916 17:25:56.469704  379161 main.go:141] libmachine: Successfully made call to close driver server
	I0916 17:25:56.469725  379161 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 17:25:56.469754  379161 main.go:141] libmachine: (addons-529439) DBG | Closing plugin on server side
	I0916 17:25:56.472245  379161 addons.go:475] Verifying addon gcp-auth=true in "addons-529439"
	I0916 17:25:56.474201  379161 out.go:177] * Verifying gcp-auth addon...
	I0916 17:25:56.476182  379161 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 17:25:56.495944  379161 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 17:25:56.495968  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:56.796398  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:56.818418  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:56.818556  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:56.992872  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:57.295684  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:57.319423  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:57.321011  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:57.480023  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:57.557990  379161 pod_ready.go:103] pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace has status "Ready":"False"
	I0916 17:25:57.795948  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:57.818172  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:57.818511  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:57.980413  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:58.295341  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:58.318422  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:58.319753  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:58.479913  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:58.795425  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:58.830512  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:58.830919  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:58.987044  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:59.294930  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:59.318316  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:59.319475  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:59.480675  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:25:59.558700  379161 pod_ready.go:103] pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace has status "Ready":"False"
	I0916 17:25:59.797087  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:25:59.818324  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:25:59.818893  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:25:59.980023  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:00.294420  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:00.320852  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:00.322091  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:00.480351  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:00.794978  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:00.818047  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:00.818117  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:00.981149  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:01.297456  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:01.317735  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:01.317781  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:01.479665  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:01.795764  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:01.817318  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:01.818454  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:01.982256  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:02.059513  379161 pod_ready.go:98] pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:26:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.32 HostIPs:[{IP:192.168.39.32}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 17:25:44 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 17:25:49 +0000 UTC,FinishedAt:2024-09-16 17:25:58 +0000 UTC,ContainerID:cri-o://311bc1111b09038fb3833478e6ac72dfd8544b3fb3a0a64d8de9eaede93245aa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://311bc1111b09038fb3833478e6ac72dfd8544b3fb3a0a64d8de9eaede93245aa Started:0xc00289a410 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002a541f0} {Name:kube-api-access-s2wvb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002a54200}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 17:26:02.059550  379161 pod_ready.go:82] duration metric: took 8.508167871s for pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace to be "Ready" ...
	E0916 17:26:02.059567  379161 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-d4zwb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:26:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-16 17:25:44 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.32 HostIPs:[{IP:192.168.39.32}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-16 17:25:44 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-16 17:25:49 +0000 UTC,FinishedAt:2024-09-16 17:25:58 +0000 UTC,ContainerID:cri-o://311bc1111b09038fb3833478e6ac72dfd8544b3fb3a0a64d8de9eaede93245aa,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://311bc1111b09038fb3833478e6ac72dfd8544b3fb3a0a64d8de9eaede93245aa Started:0xc00289a410 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002a541f0} {Name:kube-api-access-s2wvb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002a54200}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0916 17:26:02.059581  379161 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.067717  379161 pod_ready.go:93] pod "etcd-addons-529439" in "kube-system" namespace has status "Ready":"True"
	I0916 17:26:02.067755  379161 pod_ready.go:82] duration metric: took 8.163663ms for pod "etcd-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.067772  379161 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.075050  379161 pod_ready.go:93] pod "kube-apiserver-addons-529439" in "kube-system" namespace has status "Ready":"True"
	I0916 17:26:02.075084  379161 pod_ready.go:82] duration metric: took 7.303199ms for pod "kube-apiserver-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.075097  379161 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.081307  379161 pod_ready.go:93] pod "kube-controller-manager-addons-529439" in "kube-system" namespace has status "Ready":"True"
	I0916 17:26:02.081336  379161 pod_ready.go:82] duration metric: took 6.230291ms for pod "kube-controller-manager-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.081352  379161 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ltq47" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.102264  379161 pod_ready.go:93] pod "kube-proxy-ltq47" in "kube-system" namespace has status "Ready":"True"
	I0916 17:26:02.102294  379161 pod_ready.go:82] duration metric: took 20.933799ms for pod "kube-proxy-ltq47" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.102307  379161 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.296507  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:02.318455  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:02.319139  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:02.455699  379161 pod_ready.go:93] pod "kube-scheduler-addons-529439" in "kube-system" namespace has status "Ready":"True"
	I0916 17:26:02.455743  379161 pod_ready.go:82] duration metric: took 353.42599ms for pod "kube-scheduler-addons-529439" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.455758  379161 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace to be "Ready" ...
	I0916 17:26:02.482346  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:02.797834  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:02.820942  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:02.821742  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:02.980411  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:03.294369  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:03.317940  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:03.318090  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:03.479525  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:03.795453  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:03.817801  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:03.817944  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:03.980822  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:04.294130  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:04.319041  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:04.319071  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:04.463634  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:04.480497  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:04.794776  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:04.816988  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:04.817868  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:04.981736  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:05.295934  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:05.317064  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:05.319054  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:05.480577  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:05.815377  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:05.820571  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:05.820832  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:05.979615  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:06.297261  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:06.316910  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:06.318920  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:06.479554  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:06.795972  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:06.819013  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:06.820144  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:06.963617  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:06.979765  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:07.296221  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:07.318498  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:07.319228  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:07.480746  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:07.797813  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:07.894500  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:07.894546  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:07.980249  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:08.295029  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:08.318329  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:08.319081  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:08.480107  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:08.795324  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:08.817115  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:08.821388  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:08.981435  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:09.294497  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:09.318899  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:09.319192  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:09.463263  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:09.480193  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:09.794620  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:09.817787  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:09.818544  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:09.981621  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:10.296206  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:10.318193  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:10.318269  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:10.479832  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:10.794946  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:10.818217  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:10.818763  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:10.990693  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:11.295217  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:11.318866  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:11.319787  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:11.481022  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:11.794695  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:11.817981  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:11.818580  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:11.963674  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:11.980180  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:12.295908  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:12.318388  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:12.318456  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:12.480789  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:12.795037  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:12.816992  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:12.818013  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:12.980291  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:13.295864  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:13.318330  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:13.319050  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:13.480436  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:13.794822  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:13.817468  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:13.817475  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:13.965394  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:13.987938  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:14.295873  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:14.317514  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:14.317927  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:14.480351  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:14.796315  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:14.817711  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:14.817770  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:14.980746  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:15.295526  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:15.317858  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:15.318301  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:15.480293  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:15.794671  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:15.818389  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:15.818728  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:15.980310  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:16.294645  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:16.318799  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:16.318818  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:16.465540  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:16.479621  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:16.795468  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:16.819310  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:16.819723  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:16.979697  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:17.295816  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:17.317995  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:17.318468  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:17.480644  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:17.794677  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:17.818982  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:17.819176  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:18.298869  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:18.300635  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:18.322834  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:18.323344  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:18.479831  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:18.795260  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:18.817547  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:18.819692  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:18.963195  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:18.980178  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:19.294394  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:19.318535  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:19.318558  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:19.479324  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:19.794888  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:19.817062  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:19.817721  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:19.980689  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:20.296265  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:20.396968  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:20.397165  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:20.481786  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:20.794563  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:20.817300  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:20.818064  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:20.979805  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:21.296405  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:21.317428  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:21.318001  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:21.463905  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:21.479913  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:21.794305  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:21.816526  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:21.818403  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:21.979960  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:22.294210  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:22.318795  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:22.323845  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:22.480272  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:22.794471  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:22.818638  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:22.819032  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:22.979883  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:23.295401  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:23.317931  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:23.318169  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:23.464983  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:23.479916  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:23.908990  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:23.910222  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:23.911540  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:23.980857  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:24.294922  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:24.317099  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:24.317727  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:24.479811  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:24.794771  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:24.817756  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:24.818309  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:24.979752  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:25.295508  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:25.317889  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:25.318035  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:25.479756  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:25.795202  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:25.820056  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:25.820108  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:25.962920  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:25.980536  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:26.295115  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:26.317837  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:26.318194  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:26.480036  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:26.794709  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:26.818246  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:26.818622  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:26.980482  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:27.294955  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:27.317703  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:27.318086  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:27.479649  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:27.982562  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:27.983758  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:27.983979  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:27.985908  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:27.986079  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:28.296031  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:28.317785  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:28.318125  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:28.481772  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:28.794770  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:28.818244  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:28.818541  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:28.982787  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:29.295315  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:29.317592  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:29.317886  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:29.480132  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:29.795154  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:29.819691  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:29.820142  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:29.979558  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:30.296512  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:30.317810  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:30.317935  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:30.462762  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:30.480617  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:30.795180  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:30.818456  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:30.819411  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:30.979972  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:31.294072  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:31.318055  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:31.318335  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:31.479937  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:31.795114  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:31.816943  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:31.819202  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:31.980304  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:32.295055  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:32.322583  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:32.322696  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 17:26:32.463520  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:32.479951  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:32.795728  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:32.816798  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:32.818173  379161 kapi.go:107] duration metric: took 39.504522466s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 17:26:32.979776  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:33.294909  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:33.317710  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:33.479971  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:33.795282  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:33.817231  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:33.980007  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:34.295499  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:34.317303  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:34.463676  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:34.480480  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:34.795092  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:34.817104  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:34.979419  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:35.295133  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:35.317202  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:35.480182  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:35.795002  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:35.818439  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:35.979976  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:36.296384  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:36.317439  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:36.931787  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:36.932515  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:36.932580  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:36.933625  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:36.980119  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:37.293964  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:37.317180  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:37.479889  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:37.795277  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:37.817905  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:37.979729  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:38.299238  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:38.323631  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:38.479596  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:38.797337  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:38.816907  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:38.961603  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:38.980114  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:39.295405  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:39.317688  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:39.479894  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:39.793910  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:39.817050  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:39.980243  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:40.295032  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:40.318375  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:40.479465  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:40.796855  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:40.895734  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:40.962943  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:40.997460  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:41.294724  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:41.317469  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:41.480510  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:41.794812  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:41.816583  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:42.276376  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:42.294733  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:42.318802  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:42.486054  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:42.801174  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:42.821489  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:42.964998  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:42.980420  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:43.294827  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:43.317254  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:43.479651  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:43.795891  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:43.821122  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:43.980054  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:44.295873  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:44.317126  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:44.479912  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:44.794496  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:44.821481  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:44.981524  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:45.294963  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:45.317261  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:45.465518  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:45.480188  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:45.795169  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:45.818040  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:45.980630  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:46.295189  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:46.317370  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:46.481572  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:46.794678  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:46.818178  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:47.291683  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:47.295252  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:47.318264  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:47.480253  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:47.794644  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:47.817531  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:47.962264  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:47.981246  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:48.295644  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:48.317613  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:48.504856  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:48.796485  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:48.818371  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:49.034740  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:49.295534  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:49.397994  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:49.479884  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:49.799661  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:49.819514  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:49.968566  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:49.981126  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:50.294698  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:50.317985  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:50.483486  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:50.796182  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:50.817878  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:50.980089  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:51.295069  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:51.622718  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:51.623031  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:51.794729  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:51.819482  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:51.982435  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:52.294827  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:52.317197  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:52.462170  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:52.480044  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:52.794657  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:52.817641  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:52.980135  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:53.294756  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:53.317719  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:53.488941  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:53.794688  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:53.818818  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:53.980142  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:54.295106  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:54.317007  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:54.462327  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:54.479980  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:54.794154  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:54.818494  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:54.980141  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:55.294795  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:55.318307  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:55.481095  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:55.794609  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:55.817360  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:55.990630  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:56.297845  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:56.321673  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:56.480176  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:56.794624  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:56.817442  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:56.962887  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:56.980168  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:57.295096  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:57.317730  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:57.480513  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:57.797551  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:57.899736  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:57.995493  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:58.296093  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:58.320976  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:58.480722  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:58.796664  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:58.818086  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:58.965659  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:26:58.979630  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:59.295747  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:59.317918  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:59.481137  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:26:59.795715  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:26:59.819175  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:26:59.984908  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:00.296643  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:00.320084  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:00.482055  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:00.796399  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:00.817209  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:00.980951  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:01.296366  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:01.318882  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:01.462305  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:01.479960  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:01.795361  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:01.816711  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:01.980531  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:02.298167  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:02.318874  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:02.480680  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:02.796861  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:02.817469  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:02.980019  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:03.294883  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:03.317706  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:03.471326  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:03.514993  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:03.801690  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:03.817971  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:03.979530  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:04.296234  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:04.318550  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:04.479528  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:04.794714  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:04.818346  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:04.979847  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:05.296245  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:05.318035  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:05.480269  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:05.795003  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:05.817594  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:05.965711  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:05.980792  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:06.295853  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:06.317241  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:06.480005  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:07.177281  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:07.178309  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:07.179203  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:07.295352  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:07.317599  379161 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 17:27:07.481255  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:07.796784  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:07.819451  379161 kapi.go:107] duration metric: took 1m14.506584767s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 17:27:07.979266  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:08.295055  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:08.462310  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:08.480446  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:08.803325  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:09.001736  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:09.295474  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:09.480455  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:09.795635  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:09.980077  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:10.294820  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:10.463101  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:10.480079  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:10.796440  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:10.980691  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:11.295552  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:11.480044  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:11.794769  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:11.979851  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:12.293907  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:12.479942  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:12.794851  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:12.963052  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:12.980309  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:13.296757  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:13.480364  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 17:27:13.795479  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:13.980396  379161 kapi.go:107] duration metric: took 1m17.504206791s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 17:27:13.982682  379161 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-529439 cluster.
	I0916 17:27:13.984235  379161 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 17:27:13.985516  379161 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 17:27:14.294910  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:14.795461  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:15.295115  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:15.462962  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:15.794946  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:16.294919  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:16.795198  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:17.295266  379161 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 17:27:17.464706  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:17.795701  379161 kapi.go:107] duration metric: took 1m23.005962393s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 17:27:17.797754  379161 out.go:177] * Enabled addons: helm-tiller, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0916 17:27:17.799258  379161 addons.go:510] duration metric: took 1m33.784764071s for enable addons: enabled=[helm-tiller nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0916 17:27:19.962652  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:21.962782  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:23.963633  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:25.965793  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:28.462247  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:30.963484  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:33.462868  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:35.962993  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:37.963158  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:40.462773  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:42.464291  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:44.962389  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:46.963724  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:49.462537  379161 pod_ready.go:103] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"False"
	I0916 17:27:50.962060  379161 pod_ready.go:93] pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace has status "Ready":"True"
	I0916 17:27:50.962090  379161 pod_ready.go:82] duration metric: took 1m48.506324143s for pod "metrics-server-84c5f94fbc-cb4vj" in "kube-system" namespace to be "Ready" ...
	I0916 17:27:50.962105  379161 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7zwwg" in "kube-system" namespace to be "Ready" ...
	I0916 17:27:50.967673  379161 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7zwwg" in "kube-system" namespace has status "Ready":"True"
	I0916 17:27:50.967701  379161 pod_ready.go:82] duration metric: took 5.586968ms for pod "nvidia-device-plugin-daemonset-7zwwg" in "kube-system" namespace to be "Ready" ...
	I0916 17:27:50.967720  379161 pod_ready.go:39] duration metric: took 1m57.599470466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 17:27:50.967741  379161 api_server.go:52] waiting for apiserver process to appear ...
	I0916 17:27:50.967778  379161 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 17:27:50.967846  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 17:27:51.018812  379161 cri.go:89] found id: "32f0b191bbd780b338ebcd15637b6659e57a342a12050cd11b485582366f04a1"
	I0916 17:27:51.018840  379161 cri.go:89] found id: ""
	I0916 17:27:51.018850  379161 logs.go:276] 1 containers: [32f0b191bbd780b338ebcd15637b6659e57a342a12050cd11b485582366f04a1]
	I0916 17:27:51.018905  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:27:51.023906  379161 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 17:27:51.023997  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 17:27:51.069259  379161 cri.go:89] found id: "1838b392d04372e96b4e7e4e5aa574954b2fbe6b7388399b4a08a06416d64e6f"
	I0916 17:27:51.069286  379161 cri.go:89] found id: ""
	I0916 17:27:51.069296  379161 logs.go:276] 1 containers: [1838b392d04372e96b4e7e4e5aa574954b2fbe6b7388399b4a08a06416d64e6f]
	I0916 17:27:51.069383  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:27:51.074038  379161 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 17:27:51.074117  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 17:27:51.117443  379161 cri.go:89] found id: "050715bdad5c19880338e313642915b55b8bff6db4faa83323cbc82d49e044fb"
	I0916 17:27:51.117473  379161 cri.go:89] found id: ""
	I0916 17:27:51.117481  379161 logs.go:276] 1 containers: [050715bdad5c19880338e313642915b55b8bff6db4faa83323cbc82d49e044fb]
	I0916 17:27:51.117535  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:27:51.121986  379161 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 17:27:51.122074  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 17:27:51.164572  379161 cri.go:89] found id: "7ceae19336a9505bd1b31635dbb8b1c34ff6252aef22c93e5a04a5d5c9db8066"
	I0916 17:27:51.164604  379161 cri.go:89] found id: ""
	I0916 17:27:51.164614  379161 logs.go:276] 1 containers: [7ceae19336a9505bd1b31635dbb8b1c34ff6252aef22c93e5a04a5d5c9db8066]
	I0916 17:27:51.164692  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:27:51.169115  379161 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 17:27:51.169187  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 17:27:51.217899  379161 cri.go:89] found id: "8c4aa624ab5c49fe220f9e7efac9ea16400e72080b018392ef69a68cd8b2dd07"
	I0916 17:27:51.217928  379161 cri.go:89] found id: ""
	I0916 17:27:51.217936  379161 logs.go:276] 1 containers: [8c4aa624ab5c49fe220f9e7efac9ea16400e72080b018392ef69a68cd8b2dd07]
	I0916 17:27:51.217993  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:27:51.223135  379161 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 17:27:51.223211  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 17:27:51.264612  379161 cri.go:89] found id: "739db28d0f0c0f65e70b61201682ab1f429f958d458534785aa66c78a91cfd3b"
	I0916 17:27:51.264638  379161 cri.go:89] found id: ""
	I0916 17:27:51.264645  379161 logs.go:276] 1 containers: [739db28d0f0c0f65e70b61201682ab1f429f958d458534785aa66c78a91cfd3b]
	I0916 17:27:51.264702  379161 ssh_runner.go:195] Run: which crictl
	I0916 17:27:51.269184  379161 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 17:27:51.269279  379161 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 17:27:51.312202  379161 cri.go:89] found id: ""
	I0916 17:27:51.312239  379161 logs.go:276] 0 containers: []
	W0916 17:27:51.312252  379161 logs.go:278] No container was found matching "kindnet"
	I0916 17:27:51.312277  379161 logs.go:123] Gathering logs for kube-scheduler [7ceae19336a9505bd1b31635dbb8b1c34ff6252aef22c93e5a04a5d5c9db8066] ...
	I0916 17:27:51.312303  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ceae19336a9505bd1b31635dbb8b1c34ff6252aef22c93e5a04a5d5c9db8066"
	I0916 17:27:51.361462  379161 logs.go:123] Gathering logs for kube-proxy [8c4aa624ab5c49fe220f9e7efac9ea16400e72080b018392ef69a68cd8b2dd07] ...
	I0916 17:27:51.361505  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c4aa624ab5c49fe220f9e7efac9ea16400e72080b018392ef69a68cd8b2dd07"
	I0916 17:27:51.401113  379161 logs.go:123] Gathering logs for kube-controller-manager [739db28d0f0c0f65e70b61201682ab1f429f958d458534785aa66c78a91cfd3b] ...
	I0916 17:27:51.401145  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 739db28d0f0c0f65e70b61201682ab1f429f958d458534785aa66c78a91cfd3b"
	I0916 17:27:51.472134  379161 logs.go:123] Gathering logs for CRI-O ...
	I0916 17:27:51.472175  379161 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-529439 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.05s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 node stop m02 -v=7 --alsologtostderr
E0916 18:14:37.962041  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:15:18.924175  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.500845751s)

                                                
                                                
-- stdout --
	* Stopping node "ha-365438-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:14:26.673918  396831 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:14:26.674251  396831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:14:26.674267  396831 out.go:358] Setting ErrFile to fd 2...
	I0916 18:14:26.674274  396831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:14:26.674816  396831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:14:26.675177  396831 mustload.go:65] Loading cluster: ha-365438
	I0916 18:14:26.675763  396831 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:14:26.675791  396831 stop.go:39] StopHost: ha-365438-m02
	I0916 18:14:26.676229  396831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:14:26.676280  396831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:14:26.692322  396831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43231
	I0916 18:14:26.692828  396831 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:14:26.693379  396831 main.go:141] libmachine: Using API Version  1
	I0916 18:14:26.693402  396831 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:14:26.693755  396831 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:14:26.696184  396831 out.go:177] * Stopping node "ha-365438-m02"  ...
	I0916 18:14:26.697697  396831 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 18:14:26.697746  396831 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:14:26.698052  396831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 18:14:26.698079  396831 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:14:26.701254  396831 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:14:26.701770  396831 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:14:26.701807  396831 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:14:26.701989  396831 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:14:26.702175  396831 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:14:26.702313  396831 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:14:26.702429  396831 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:14:26.797285  396831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 18:14:26.853588  396831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 18:14:26.912381  396831 main.go:141] libmachine: Stopping "ha-365438-m02"...
	I0916 18:14:26.912410  396831 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:14:26.914027  396831 main.go:141] libmachine: (ha-365438-m02) Calling .Stop
	I0916 18:14:26.917593  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 0/120
	I0916 18:14:27.919798  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 1/120
	I0916 18:14:28.921814  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 2/120
	I0916 18:14:29.923399  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 3/120
	I0916 18:14:30.924650  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 4/120
	I0916 18:14:31.926793  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 5/120
	I0916 18:14:32.928078  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 6/120
	I0916 18:14:33.929457  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 7/120
	I0916 18:14:34.931509  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 8/120
	I0916 18:14:35.933441  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 9/120
	I0916 18:14:36.935875  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 10/120
	I0916 18:14:37.937618  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 11/120
	I0916 18:14:38.939641  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 12/120
	I0916 18:14:39.940935  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 13/120
	I0916 18:14:40.942551  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 14/120
	I0916 18:14:41.944601  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 15/120
	I0916 18:14:42.946017  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 16/120
	I0916 18:14:43.947201  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 17/120
	I0916 18:14:44.948724  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 18/120
	I0916 18:14:45.950101  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 19/120
	I0916 18:14:46.952384  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 20/120
	I0916 18:14:47.953935  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 21/120
	I0916 18:14:48.955439  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 22/120
	I0916 18:14:49.956961  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 23/120
	I0916 18:14:50.958466  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 24/120
	I0916 18:14:51.960716  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 25/120
	I0916 18:14:52.962427  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 26/120
	I0916 18:14:53.964635  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 27/120
	I0916 18:14:54.965944  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 28/120
	I0916 18:14:55.967450  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 29/120
	I0916 18:14:56.969253  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 30/120
	I0916 18:14:57.971078  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 31/120
	I0916 18:14:58.973119  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 32/120
	I0916 18:14:59.975429  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 33/120
	I0916 18:15:00.977126  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 34/120
	I0916 18:15:01.979128  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 35/120
	I0916 18:15:02.980842  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 36/120
	I0916 18:15:03.982190  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 37/120
	I0916 18:15:04.983644  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 38/120
	I0916 18:15:05.985882  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 39/120
	I0916 18:15:06.988038  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 40/120
	I0916 18:15:07.989457  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 41/120
	I0916 18:15:08.991800  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 42/120
	I0916 18:15:09.993271  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 43/120
	I0916 18:15:10.995315  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 44/120
	I0916 18:15:11.997146  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 45/120
	I0916 18:15:12.999389  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 46/120
	I0916 18:15:14.000709  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 47/120
	I0916 18:15:15.002703  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 48/120
	I0916 18:15:16.004542  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 49/120
	I0916 18:15:17.006442  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 50/120
	I0916 18:15:18.007996  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 51/120
	I0916 18:15:19.009351  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 52/120
	I0916 18:15:20.011670  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 53/120
	I0916 18:15:21.013379  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 54/120
	I0916 18:15:22.015453  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 55/120
	I0916 18:15:23.016887  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 56/120
	I0916 18:15:24.019187  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 57/120
	I0916 18:15:25.020794  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 58/120
	I0916 18:15:26.022160  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 59/120
	I0916 18:15:27.024396  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 60/120
	I0916 18:15:28.026008  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 61/120
	I0916 18:15:29.027453  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 62/120
	I0916 18:15:30.028840  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 63/120
	I0916 18:15:31.030333  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 64/120
	I0916 18:15:32.032274  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 65/120
	I0916 18:15:33.033662  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 66/120
	I0916 18:15:34.035812  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 67/120
	I0916 18:15:35.037286  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 68/120
	I0916 18:15:36.039715  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 69/120
	I0916 18:15:37.041757  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 70/120
	I0916 18:15:38.042985  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 71/120
	I0916 18:15:39.045230  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 72/120
	I0916 18:15:40.047625  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 73/120
	I0916 18:15:41.049913  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 74/120
	I0916 18:15:42.051591  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 75/120
	I0916 18:15:43.053130  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 76/120
	I0916 18:15:44.055368  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 77/120
	I0916 18:15:45.056737  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 78/120
	I0916 18:15:46.058036  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 79/120
	I0916 18:15:47.060748  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 80/120
	I0916 18:15:48.062442  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 81/120
	I0916 18:15:49.064603  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 82/120
	I0916 18:15:50.067111  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 83/120
	I0916 18:15:51.068816  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 84/120
	I0916 18:15:52.070525  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 85/120
	I0916 18:15:53.072745  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 86/120
	I0916 18:15:54.075094  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 87/120
	I0916 18:15:55.076451  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 88/120
	I0916 18:15:56.078586  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 89/120
	I0916 18:15:57.081208  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 90/120
	I0916 18:15:58.083519  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 91/120
	I0916 18:15:59.084911  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 92/120
	I0916 18:16:00.086343  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 93/120
	I0916 18:16:01.088638  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 94/120
	I0916 18:16:02.090838  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 95/120
	I0916 18:16:03.092451  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 96/120
	I0916 18:16:04.093944  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 97/120
	I0916 18:16:05.095552  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 98/120
	I0916 18:16:06.097080  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 99/120
	I0916 18:16:07.098194  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 100/120
	I0916 18:16:08.099540  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 101/120
	I0916 18:16:09.100979  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 102/120
	I0916 18:16:10.102483  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 103/120
	I0916 18:16:11.104821  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 104/120
	I0916 18:16:12.106419  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 105/120
	I0916 18:16:13.107945  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 106/120
	I0916 18:16:14.109423  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 107/120
	I0916 18:16:15.110824  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 108/120
	I0916 18:16:16.112635  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 109/120
	I0916 18:16:17.114706  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 110/120
	I0916 18:16:18.116085  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 111/120
	I0916 18:16:19.117441  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 112/120
	I0916 18:16:20.119560  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 113/120
	I0916 18:16:21.121089  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 114/120
	I0916 18:16:22.123347  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 115/120
	I0916 18:16:23.124977  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 116/120
	I0916 18:16:24.126171  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 117/120
	I0916 18:16:25.127921  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 118/120
	I0916 18:16:26.129343  396831 main.go:141] libmachine: (ha-365438-m02) Waiting for machine to stop 119/120
	I0916 18:16:27.130006  396831 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 18:16:27.130193  396831 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-365438 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
E0916 18:16:40.846448  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (19.054521871s)

                                                
                                                
-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:16:27.178728  397263 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:16:27.178840  397263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:16:27.178846  397263 out.go:358] Setting ErrFile to fd 2...
	I0916 18:16:27.178851  397263 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:16:27.179041  397263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:16:27.179238  397263 out.go:352] Setting JSON to false
	I0916 18:16:27.179274  397263 mustload.go:65] Loading cluster: ha-365438
	I0916 18:16:27.179415  397263 notify.go:220] Checking for updates...
	I0916 18:16:27.179712  397263 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:16:27.179729  397263 status.go:255] checking status of ha-365438 ...
	I0916 18:16:27.180147  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:27.180207  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:27.196872  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41489
	I0916 18:16:27.197534  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:27.198269  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:27.198301  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:27.198659  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:27.198893  397263 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:16:27.200621  397263 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:16:27.200638  397263 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:16:27.200941  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:27.200982  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:27.216834  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46101
	I0916 18:16:27.217429  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:27.217936  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:27.217961  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:27.218324  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:27.218526  397263 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:16:27.221806  397263 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:27.222317  397263 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:16:27.222360  397263 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:27.222547  397263 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:16:27.222870  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:27.222920  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:27.238301  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46791
	I0916 18:16:27.238739  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:27.239240  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:27.239264  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:27.239585  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:27.239787  397263 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:16:27.239984  397263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:27.240029  397263 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:16:27.243186  397263 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:27.243679  397263 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:16:27.243707  397263 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:27.243885  397263 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:16:27.244055  397263 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:16:27.244219  397263 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:16:27.244368  397263 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:16:27.340668  397263 ssh_runner.go:195] Run: systemctl --version
	I0916 18:16:27.351354  397263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:27.372551  397263 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:16:27.372595  397263 api_server.go:166] Checking apiserver status ...
	I0916 18:16:27.372639  397263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:16:27.392142  397263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:16:27.403470  397263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:16:27.403562  397263 ssh_runner.go:195] Run: ls
	I0916 18:16:27.408696  397263 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:16:27.413707  397263 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:16:27.413737  397263 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:16:27.413749  397263 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:16:27.413767  397263 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:16:27.414215  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:27.414266  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:27.430341  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39717
	I0916 18:16:27.430897  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:27.431383  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:27.431403  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:27.431759  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:27.431955  397263 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:16:27.433633  397263 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:16:27.433649  397263 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:16:27.433934  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:27.433970  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:27.449658  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0916 18:16:27.450105  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:27.450611  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:27.450633  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:27.450914  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:27.451080  397263 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:16:27.453828  397263 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:27.454203  397263 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:16:27.454228  397263 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:27.454400  397263 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:16:27.454718  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:27.454756  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:27.469873  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0916 18:16:27.470393  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:27.470926  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:27.470947  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:27.471308  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:27.471497  397263 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:16:27.471689  397263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:27.471710  397263 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:16:27.474897  397263 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:27.475241  397263 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:16:27.475265  397263 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:27.475442  397263 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:16:27.475620  397263 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:16:27.475779  397263 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:16:27.475900  397263 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:16:45.809218  397263 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:16:45.809326  397263 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:16:45.809341  397263 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:16:45.809348  397263 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:16:45.809379  397263 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:16:45.809404  397263 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:16:45.809740  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:45.809793  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:45.825042  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42617
	I0916 18:16:45.825619  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:45.826117  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:45.826148  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:45.826454  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:45.826632  397263 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:16:45.828180  397263 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:16:45.828201  397263 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:16:45.828640  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:45.828699  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:45.844528  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0916 18:16:45.845029  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:45.845586  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:45.845608  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:45.845945  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:45.846136  397263 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:16:45.849183  397263 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:45.849621  397263 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:16:45.849649  397263 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:45.849837  397263 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:16:45.850249  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:45.850296  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:45.865508  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
	I0916 18:16:45.865978  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:45.866481  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:45.866505  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:45.866823  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:45.867032  397263 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:16:45.867200  397263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:45.867224  397263 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:16:45.869958  397263 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:45.870347  397263 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:16:45.870373  397263 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:45.870490  397263 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:16:45.870696  397263 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:16:45.870876  397263 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:16:45.871073  397263 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:16:45.958825  397263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:45.976363  397263 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:16:45.976404  397263 api_server.go:166] Checking apiserver status ...
	I0916 18:16:45.976455  397263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:16:45.992416  397263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:16:46.002230  397263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:16:46.002301  397263 ssh_runner.go:195] Run: ls
	I0916 18:16:46.007477  397263 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:16:46.014311  397263 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:16:46.014360  397263 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:16:46.014370  397263 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:16:46.014385  397263 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:16:46.014763  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:46.014816  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:46.032446  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0916 18:16:46.033049  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:46.033721  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:46.033750  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:46.034138  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:46.034375  397263 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:16:46.036177  397263 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:16:46.036201  397263 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:16:46.036622  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:46.036716  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:46.052414  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33079
	I0916 18:16:46.052972  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:46.053493  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:46.053514  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:46.053882  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:46.054128  397263 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:16:46.057466  397263 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:46.057984  397263 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:16:46.058012  397263 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:46.058147  397263 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:16:46.058470  397263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:46.058522  397263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:46.074328  397263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45913
	I0916 18:16:46.074829  397263 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:46.075370  397263 main.go:141] libmachine: Using API Version  1
	I0916 18:16:46.075394  397263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:46.075722  397263 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:46.075889  397263 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:16:46.076078  397263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:46.076106  397263 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:16:46.079038  397263 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:46.079537  397263 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:16:46.079563  397263 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:46.079723  397263 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:16:46.079875  397263 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:16:46.079980  397263 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:16:46.080128  397263 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:16:46.165951  397263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:46.184157  397263 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr" : exit status 3
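The failing check above comes down to two facts recorded in the stderr: after m02 is stopped (see the "node stop m02" entry in the audit table below), the SSH dial to ha-365438-m02 at 192.168.39.18:22 fails with "no route to host", so its host state is reported as Error and the status command exits non-zero (exit status 3 in this run), which ha_test.go treats as a failed status call. The following is a minimal, hypothetical Go sketch (not the test's own helper) that reproduces both observations; the IP, port, profile name, and binary path are taken from the log, and it assumes it is run from the integration workspace root so the relative path out/minikube-linux-amd64 resolves.

// Hypothetical reproduction sketch; not part of ha_test.go.
// Step 1 mirrors the SSH-port dial that fails with "no route to host" in the log;
// step 2 re-runs the status command from the assertion and surfaces its exit code.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

func main() {
	// Step 1: can the stopped node's SSH port be reached at all?
	conn, err := net.DialTimeout("tcp", "192.168.39.18:22", 5*time.Second)
	if err != nil {
		fmt.Println("ssh port unreachable:", err) // e.g. "connect: no route to host"
	} else {
		conn.Close()
	}

	// Step 2: the exact command from the failing assertion.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-365438",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("status returned:", err) // exit status 3 in this run
	}
}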
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-365438 -n ha-365438
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-365438 logs -n 25: (1.452857202s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438:/home/docker/cp-test_ha-365438-m03_ha-365438.txt                       |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438 sudo cat                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438.txt                                 |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m04 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp testdata/cp-test.txt                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438:/home/docker/cp-test_ha-365438-m04_ha-365438.txt                       |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438 sudo cat                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438.txt                                 |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03:/home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m03 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-365438 node stop m02 -v=7                                                     | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 18:09:45
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 18:09:45.861740  392787 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:09:45.861864  392787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:09:45.861873  392787 out.go:358] Setting ErrFile to fd 2...
	I0916 18:09:45.861876  392787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:09:45.862039  392787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:09:45.862626  392787 out.go:352] Setting JSON to false
	I0916 18:09:45.863602  392787 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6729,"bootTime":1726503457,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:09:45.863708  392787 start.go:139] virtualization: kvm guest
	I0916 18:09:45.865949  392787 out.go:177] * [ha-365438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:09:45.867472  392787 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:09:45.867509  392787 notify.go:220] Checking for updates...
	I0916 18:09:45.870430  392787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:09:45.872039  392787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:09:45.873613  392787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:09:45.875149  392787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:09:45.876420  392787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:09:45.877805  392787 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:09:45.913887  392787 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 18:09:45.915112  392787 start.go:297] selected driver: kvm2
	I0916 18:09:45.915124  392787 start.go:901] validating driver "kvm2" against <nil>
	I0916 18:09:45.915137  392787 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:09:45.915845  392787 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:09:45.915944  392787 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 18:09:45.931147  392787 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 18:09:45.931218  392787 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 18:09:45.931517  392787 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:09:45.931559  392787 cni.go:84] Creating CNI manager for ""
	I0916 18:09:45.931612  392787 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 18:09:45.931620  392787 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 18:09:45.931682  392787 start.go:340] cluster config:
	{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:09:45.931778  392787 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:09:45.933943  392787 out.go:177] * Starting "ha-365438" primary control-plane node in "ha-365438" cluster
	I0916 18:09:45.935381  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:09:45.935438  392787 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 18:09:45.935448  392787 cache.go:56] Caching tarball of preloaded images
	I0916 18:09:45.935550  392787 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:09:45.935561  392787 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:09:45.935870  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:09:45.935895  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json: {Name:mkb6c5565eaaa6718155d06cabf91699df9faa1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:09:45.936041  392787 start.go:360] acquireMachinesLock for ha-365438: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:09:45.936069  392787 start.go:364] duration metric: took 15.895µs to acquireMachinesLock for "ha-365438"
	I0916 18:09:45.936085  392787 start.go:93] Provisioning new machine with config: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:09:45.936144  392787 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 18:09:45.937672  392787 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 18:09:45.937824  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:09:45.937874  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:09:45.952974  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0916 18:09:45.953548  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:09:45.954158  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:09:45.954181  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:09:45.954547  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:09:45.954720  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:09:45.954868  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:09:45.955015  392787 start.go:159] libmachine.API.Create for "ha-365438" (driver="kvm2")
	I0916 18:09:45.955048  392787 client.go:168] LocalClient.Create starting
	I0916 18:09:45.955096  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 18:09:45.955136  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:09:45.955157  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:09:45.955234  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 18:09:45.955262  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:09:45.955283  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:09:45.955309  392787 main.go:141] libmachine: Running pre-create checks...
	I0916 18:09:45.955321  392787 main.go:141] libmachine: (ha-365438) Calling .PreCreateCheck
	I0916 18:09:45.955657  392787 main.go:141] libmachine: (ha-365438) Calling .GetConfigRaw
	I0916 18:09:45.956025  392787 main.go:141] libmachine: Creating machine...
	I0916 18:09:45.956040  392787 main.go:141] libmachine: (ha-365438) Calling .Create
	I0916 18:09:45.956186  392787 main.go:141] libmachine: (ha-365438) Creating KVM machine...
	I0916 18:09:45.957461  392787 main.go:141] libmachine: (ha-365438) DBG | found existing default KVM network
	I0916 18:09:45.958151  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:45.958019  392810 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0916 18:09:45.958250  392787 main.go:141] libmachine: (ha-365438) DBG | created network xml: 
	I0916 18:09:45.958269  392787 main.go:141] libmachine: (ha-365438) DBG | <network>
	I0916 18:09:45.958279  392787 main.go:141] libmachine: (ha-365438) DBG |   <name>mk-ha-365438</name>
	I0916 18:09:45.958289  392787 main.go:141] libmachine: (ha-365438) DBG |   <dns enable='no'/>
	I0916 18:09:45.958297  392787 main.go:141] libmachine: (ha-365438) DBG |   
	I0916 18:09:45.958305  392787 main.go:141] libmachine: (ha-365438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 18:09:45.958316  392787 main.go:141] libmachine: (ha-365438) DBG |     <dhcp>
	I0916 18:09:45.958327  392787 main.go:141] libmachine: (ha-365438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 18:09:45.958336  392787 main.go:141] libmachine: (ha-365438) DBG |     </dhcp>
	I0916 18:09:45.958364  392787 main.go:141] libmachine: (ha-365438) DBG |   </ip>
	I0916 18:09:45.958372  392787 main.go:141] libmachine: (ha-365438) DBG |   
	I0916 18:09:45.958376  392787 main.go:141] libmachine: (ha-365438) DBG | </network>
	I0916 18:09:45.958403  392787 main.go:141] libmachine: (ha-365438) DBG | 
	I0916 18:09:45.963564  392787 main.go:141] libmachine: (ha-365438) DBG | trying to create private KVM network mk-ha-365438 192.168.39.0/24...
	I0916 18:09:46.030993  392787 main.go:141] libmachine: (ha-365438) DBG | private KVM network mk-ha-365438 192.168.39.0/24 created
	I0916 18:09:46.031030  392787 main.go:141] libmachine: (ha-365438) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438 ...
	I0916 18:09:46.031043  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.030933  392810 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:09:46.031099  392787 main.go:141] libmachine: (ha-365438) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 18:09:46.031127  392787 main.go:141] libmachine: (ha-365438) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 18:09:46.302314  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.302075  392810 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa...
	I0916 18:09:46.432576  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.432389  392810 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/ha-365438.rawdisk...
	I0916 18:09:46.432634  392787 main.go:141] libmachine: (ha-365438) DBG | Writing magic tar header
	I0916 18:09:46.432653  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438 (perms=drwx------)
	I0916 18:09:46.432673  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 18:09:46.432685  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 18:09:46.432701  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 18:09:46.432717  392787 main.go:141] libmachine: (ha-365438) DBG | Writing SSH key tar header
	I0916 18:09:46.432729  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.432504  392810 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438 ...
	I0916 18:09:46.432766  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438
	I0916 18:09:46.432803  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 18:09:46.432817  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:09:46.432833  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 18:09:46.432850  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 18:09:46.433024  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 18:09:46.433062  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins
	I0916 18:09:46.433085  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 18:09:46.433289  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home
	I0916 18:09:46.433779  392787 main.go:141] libmachine: (ha-365438) Creating domain...
	I0916 18:09:46.433790  392787 main.go:141] libmachine: (ha-365438) DBG | Skipping /home - not owner
	I0916 18:09:46.434962  392787 main.go:141] libmachine: (ha-365438) define libvirt domain using xml: 
	I0916 18:09:46.434983  392787 main.go:141] libmachine: (ha-365438) <domain type='kvm'>
	I0916 18:09:46.434992  392787 main.go:141] libmachine: (ha-365438)   <name>ha-365438</name>
	I0916 18:09:46.434999  392787 main.go:141] libmachine: (ha-365438)   <memory unit='MiB'>2200</memory>
	I0916 18:09:46.435006  392787 main.go:141] libmachine: (ha-365438)   <vcpu>2</vcpu>
	I0916 18:09:46.435027  392787 main.go:141] libmachine: (ha-365438)   <features>
	I0916 18:09:46.435039  392787 main.go:141] libmachine: (ha-365438)     <acpi/>
	I0916 18:09:46.435045  392787 main.go:141] libmachine: (ha-365438)     <apic/>
	I0916 18:09:46.435052  392787 main.go:141] libmachine: (ha-365438)     <pae/>
	I0916 18:09:46.435059  392787 main.go:141] libmachine: (ha-365438)     
	I0916 18:09:46.435078  392787 main.go:141] libmachine: (ha-365438)   </features>
	I0916 18:09:46.435094  392787 main.go:141] libmachine: (ha-365438)   <cpu mode='host-passthrough'>
	I0916 18:09:46.435128  392787 main.go:141] libmachine: (ha-365438)   
	I0916 18:09:46.435159  392787 main.go:141] libmachine: (ha-365438)   </cpu>
	I0916 18:09:46.435168  392787 main.go:141] libmachine: (ha-365438)   <os>
	I0916 18:09:46.435174  392787 main.go:141] libmachine: (ha-365438)     <type>hvm</type>
	I0916 18:09:46.435186  392787 main.go:141] libmachine: (ha-365438)     <boot dev='cdrom'/>
	I0916 18:09:46.435196  392787 main.go:141] libmachine: (ha-365438)     <boot dev='hd'/>
	I0916 18:09:46.435204  392787 main.go:141] libmachine: (ha-365438)     <bootmenu enable='no'/>
	I0916 18:09:46.435210  392787 main.go:141] libmachine: (ha-365438)   </os>
	I0916 18:09:46.435221  392787 main.go:141] libmachine: (ha-365438)   <devices>
	I0916 18:09:46.435231  392787 main.go:141] libmachine: (ha-365438)     <disk type='file' device='cdrom'>
	I0916 18:09:46.435262  392787 main.go:141] libmachine: (ha-365438)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/boot2docker.iso'/>
	I0916 18:09:46.435282  392787 main.go:141] libmachine: (ha-365438)       <target dev='hdc' bus='scsi'/>
	I0916 18:09:46.435293  392787 main.go:141] libmachine: (ha-365438)       <readonly/>
	I0916 18:09:46.435303  392787 main.go:141] libmachine: (ha-365438)     </disk>
	I0916 18:09:46.435313  392787 main.go:141] libmachine: (ha-365438)     <disk type='file' device='disk'>
	I0916 18:09:46.435325  392787 main.go:141] libmachine: (ha-365438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 18:09:46.435341  392787 main.go:141] libmachine: (ha-365438)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/ha-365438.rawdisk'/>
	I0916 18:09:46.435348  392787 main.go:141] libmachine: (ha-365438)       <target dev='hda' bus='virtio'/>
	I0916 18:09:46.435355  392787 main.go:141] libmachine: (ha-365438)     </disk>
	I0916 18:09:46.435362  392787 main.go:141] libmachine: (ha-365438)     <interface type='network'>
	I0916 18:09:46.435372  392787 main.go:141] libmachine: (ha-365438)       <source network='mk-ha-365438'/>
	I0916 18:09:46.435380  392787 main.go:141] libmachine: (ha-365438)       <model type='virtio'/>
	I0916 18:09:46.435391  392787 main.go:141] libmachine: (ha-365438)     </interface>
	I0916 18:09:46.435401  392787 main.go:141] libmachine: (ha-365438)     <interface type='network'>
	I0916 18:09:46.435423  392787 main.go:141] libmachine: (ha-365438)       <source network='default'/>
	I0916 18:09:46.435444  392787 main.go:141] libmachine: (ha-365438)       <model type='virtio'/>
	I0916 18:09:46.435456  392787 main.go:141] libmachine: (ha-365438)     </interface>
	I0916 18:09:46.435463  392787 main.go:141] libmachine: (ha-365438)     <serial type='pty'>
	I0916 18:09:46.435474  392787 main.go:141] libmachine: (ha-365438)       <target port='0'/>
	I0916 18:09:46.435482  392787 main.go:141] libmachine: (ha-365438)     </serial>
	I0916 18:09:46.435493  392787 main.go:141] libmachine: (ha-365438)     <console type='pty'>
	I0916 18:09:46.435503  392787 main.go:141] libmachine: (ha-365438)       <target type='serial' port='0'/>
	I0916 18:09:46.435515  392787 main.go:141] libmachine: (ha-365438)     </console>
	I0916 18:09:46.435530  392787 main.go:141] libmachine: (ha-365438)     <rng model='virtio'>
	I0916 18:09:46.435545  392787 main.go:141] libmachine: (ha-365438)       <backend model='random'>/dev/random</backend>
	I0916 18:09:46.435555  392787 main.go:141] libmachine: (ha-365438)     </rng>
	I0916 18:09:46.435564  392787 main.go:141] libmachine: (ha-365438)     
	I0916 18:09:46.435575  392787 main.go:141] libmachine: (ha-365438)     
	I0916 18:09:46.435584  392787 main.go:141] libmachine: (ha-365438)   </devices>
	I0916 18:09:46.435601  392787 main.go:141] libmachine: (ha-365438) </domain>
	I0916 18:09:46.435610  392787 main.go:141] libmachine: (ha-365438) 
	I0916 18:09:46.439784  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:2c:d8:d4 in network default
	I0916 18:09:46.440296  392787 main.go:141] libmachine: (ha-365438) Ensuring networks are active...
	I0916 18:09:46.440318  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:46.441001  392787 main.go:141] libmachine: (ha-365438) Ensuring network default is active
	I0916 18:09:46.441405  392787 main.go:141] libmachine: (ha-365438) Ensuring network mk-ha-365438 is active
	I0916 18:09:46.442094  392787 main.go:141] libmachine: (ha-365438) Getting domain xml...
	I0916 18:09:46.442842  392787 main.go:141] libmachine: (ha-365438) Creating domain...
	I0916 18:09:47.648947  392787 main.go:141] libmachine: (ha-365438) Waiting to get IP...
	I0916 18:09:47.649856  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:47.650278  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:47.650334  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:47.650266  392810 retry.go:31] will retry after 283.520836ms: waiting for machine to come up
	I0916 18:09:47.935866  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:47.936176  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:47.936236  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:47.936159  392810 retry.go:31] will retry after 297.837185ms: waiting for machine to come up
	I0916 18:09:48.235774  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:48.236190  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:48.236212  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:48.236162  392810 retry.go:31] will retry after 462.816213ms: waiting for machine to come up
	I0916 18:09:48.700878  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:48.701324  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:48.701351  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:48.701273  392810 retry.go:31] will retry after 370.07957ms: waiting for machine to come up
	I0916 18:09:49.072759  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:49.073273  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:49.073320  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:49.073248  392810 retry.go:31] will retry after 688.41688ms: waiting for machine to come up
	I0916 18:09:49.763134  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:49.763556  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:49.763584  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:49.763508  392810 retry.go:31] will retry after 795.125241ms: waiting for machine to come up
	I0916 18:09:50.560100  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:50.560622  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:50.560665  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:50.560550  392810 retry.go:31] will retry after 715.844297ms: waiting for machine to come up
	I0916 18:09:51.278294  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:51.278728  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:51.278756  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:51.278686  392810 retry.go:31] will retry after 1.137561072s: waiting for machine to come up
	I0916 18:09:52.417546  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:52.417920  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:52.417944  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:52.417885  392810 retry.go:31] will retry after 1.728480138s: waiting for machine to come up
	I0916 18:09:54.148897  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:54.149250  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:54.149280  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:54.149227  392810 retry.go:31] will retry after 1.540936278s: waiting for machine to come up
	I0916 18:09:55.691955  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:55.692373  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:55.692398  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:55.692323  392810 retry.go:31] will retry after 2.060258167s: waiting for machine to come up
	I0916 18:09:57.754937  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:57.755410  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:57.755438  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:57.755358  392810 retry.go:31] will retry after 2.807471229s: waiting for machine to come up
	I0916 18:10:00.566328  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:00.566758  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:10:00.566785  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:10:00.566704  392810 retry.go:31] will retry after 2.874102784s: waiting for machine to come up
	I0916 18:10:03.444413  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:03.444863  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:10:03.444895  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:10:03.444763  392810 retry.go:31] will retry after 5.017111787s: waiting for machine to come up
	I0916 18:10:08.465292  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.465900  392787 main.go:141] libmachine: (ha-365438) Found IP for machine: 192.168.39.165
	I0916 18:10:08.465929  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has current primary IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
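The "Waiting to get IP" loop above polls libvirt with a growing, jittered retry interval until a DHCP lease for the guest's MAC appears. A rough manual equivalent (MAC and network name taken from the log, fixed sleep instead of back-off):

    until virsh --connect qemu:///system net-dhcp-leases mk-ha-365438 | grep -q '52:54:00:aa:6c:bf'; do
      sleep 2
    done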
	I0916 18:10:08.465935  392787 main.go:141] libmachine: (ha-365438) Reserving static IP address...
	I0916 18:10:08.466341  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find host DHCP lease matching {name: "ha-365438", mac: "52:54:00:aa:6c:bf", ip: "192.168.39.165"} in network mk-ha-365438
	I0916 18:10:08.541019  392787 main.go:141] libmachine: (ha-365438) DBG | Getting to WaitForSSH function...
	I0916 18:10:08.541056  392787 main.go:141] libmachine: (ha-365438) Reserved static IP address: 192.168.39.165
	I0916 18:10:08.541070  392787 main.go:141] libmachine: (ha-365438) Waiting for SSH to be available...
	I0916 18:10:08.543538  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.543895  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.543923  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.544095  392787 main.go:141] libmachine: (ha-365438) DBG | Using SSH client type: external
	I0916 18:10:08.544122  392787 main.go:141] libmachine: (ha-365438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa (-rw-------)
	I0916 18:10:08.544168  392787 main.go:141] libmachine: (ha-365438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 18:10:08.544186  392787 main.go:141] libmachine: (ha-365438) DBG | About to run SSH command:
	I0916 18:10:08.544200  392787 main.go:141] libmachine: (ha-365438) DBG | exit 0
	I0916 18:10:08.669263  392787 main.go:141] libmachine: (ha-365438) DBG | SSH cmd err, output: <nil>: 
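The WaitForSSH probe shells out to the system ssh binary with the argument list logged above and retries "exit 0" until it succeeds. The same invocation reassembled into one command (options and paths as logged, reordered for readability):

    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
        -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
        -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa \
        -p 22 docker@192.168.39.165 'exit 0'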
	I0916 18:10:08.669543  392787 main.go:141] libmachine: (ha-365438) KVM machine creation complete!
	I0916 18:10:08.669922  392787 main.go:141] libmachine: (ha-365438) Calling .GetConfigRaw
	I0916 18:10:08.670493  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:08.670676  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:08.670858  392787 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 18:10:08.670873  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:08.672073  392787 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 18:10:08.672084  392787 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 18:10:08.672089  392787 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 18:10:08.672094  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:08.674253  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.674595  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.674621  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.674775  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:08.674931  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.675052  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.675159  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:08.675291  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:08.675499  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:08.675513  392787 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 18:10:08.784730  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:08.784756  392787 main.go:141] libmachine: Detecting the provisioner...
	I0916 18:10:08.784765  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:08.787646  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.787961  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.787988  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.788205  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:08.788435  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.788617  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.788756  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:08.788961  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:08.789182  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:08.789200  392787 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 18:10:08.897712  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 18:10:08.897775  392787 main.go:141] libmachine: found compatible host: buildroot
	I0916 18:10:08.897782  392787 main.go:141] libmachine: Provisioning with buildroot...
	I0916 18:10:08.897789  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:10:08.898041  392787 buildroot.go:166] provisioning hostname "ha-365438"
	I0916 18:10:08.898070  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:10:08.898265  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:08.900576  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.901066  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.901098  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.901253  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:08.901446  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.901645  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.901751  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:08.901927  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:08.902111  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:08.902122  392787 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438 && echo "ha-365438" | sudo tee /etc/hostname
	I0916 18:10:09.024770  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438
	
	I0916 18:10:09.024806  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.027664  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.027985  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.028009  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.028250  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.028462  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.028647  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.028784  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.029008  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:09.029184  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:09.029199  392787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438' | sudo tee -a /etc/hosts; 
				fi
			fi
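Once the hostname script above has run, the guest should report the new name consistently; quick checks (illustrative, not part of the test):

    hostname                       # expected: ha-365438
    grep 'ha-365438' /etc/hosts    # expected: a 127.0.1.1 entry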
	I0916 18:10:09.148460  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:09.148498  392787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:10:09.148553  392787 buildroot.go:174] setting up certificates
	I0916 18:10:09.148565  392787 provision.go:84] configureAuth start
	I0916 18:10:09.148578  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:10:09.148870  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:09.151619  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.151998  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.152025  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.152184  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.154538  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.154865  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.154889  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.155061  392787 provision.go:143] copyHostCerts
	I0916 18:10:09.155093  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:09.155127  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:10:09.155138  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:09.155205  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:10:09.155296  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:09.155313  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:10:09.155320  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:09.155343  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:10:09.155400  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:09.155417  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:10:09.155426  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:09.155446  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:10:09.155511  392787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438 san=[127.0.0.1 192.168.39.165 ha-365438 localhost minikube]
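The profile server certificate generated here is expected to carry the SANs listed above (127.0.0.1, 192.168.39.165, ha-365438, localhost, minikube). One way to confirm on the host once the file exists, assuming openssl is installed (not part of the test flow):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'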
	I0916 18:10:09.255332  392787 provision.go:177] copyRemoteCerts
	I0916 18:10:09.255403  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:10:09.255437  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.258231  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.258551  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.258577  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.258711  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.258908  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.259042  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.259151  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:09.344339  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:10:09.344416  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 18:10:09.369182  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:10:09.369258  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:10:09.394472  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:10:09.394552  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 18:10:09.419548  392787 provision.go:87] duration metric: took 270.959045ms to configureAuth
	I0916 18:10:09.419586  392787 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:10:09.419837  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:09.419933  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.422595  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.422966  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.422993  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.423176  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.423397  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.423637  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.423798  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.423944  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:09.424166  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:09.424182  392787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:10:09.649181  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
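The command above writes the insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O; on the guest this could be sanity-checked with (illustrative):

    cat /etc/sysconfig/crio.minikube    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio            # expected: active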
	
	I0916 18:10:09.649215  392787 main.go:141] libmachine: Checking connection to Docker...
	I0916 18:10:09.649240  392787 main.go:141] libmachine: (ha-365438) Calling .GetURL
	I0916 18:10:09.650612  392787 main.go:141] libmachine: (ha-365438) DBG | Using libvirt version 6000000
	I0916 18:10:09.652753  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.653207  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.653278  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.653396  392787 main.go:141] libmachine: Docker is up and running!
	I0916 18:10:09.653409  392787 main.go:141] libmachine: Reticulating splines...
	I0916 18:10:09.653416  392787 client.go:171] duration metric: took 23.698357841s to LocalClient.Create
	I0916 18:10:09.653440  392787 start.go:167] duration metric: took 23.698426057s to libmachine.API.Create "ha-365438"
	I0916 18:10:09.653449  392787 start.go:293] postStartSetup for "ha-365438" (driver="kvm2")
	I0916 18:10:09.653459  392787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:10:09.653477  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.653791  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:10:09.653826  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.656119  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.656574  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.656599  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.656723  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.656904  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.657095  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.657220  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:09.744116  392787 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:10:09.748447  392787 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:10:09.748477  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:10:09.748543  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:10:09.748666  392787 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:10:09.748684  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:10:09.748800  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:10:09.758575  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:09.783524  392787 start.go:296] duration metric: took 130.056288ms for postStartSetup
	I0916 18:10:09.783612  392787 main.go:141] libmachine: (ha-365438) Calling .GetConfigRaw
	I0916 18:10:09.784359  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:09.786896  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.787272  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.787302  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.787596  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:09.787817  392787 start.go:128] duration metric: took 23.851663044s to createHost
	I0916 18:10:09.787843  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.790222  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.790469  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.790492  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.790649  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.790844  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.791032  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.791191  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.791344  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:09.791541  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:09.791559  392787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:10:09.902572  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510209.877644731
	
	I0916 18:10:09.902626  392787 fix.go:216] guest clock: 1726510209.877644731
	I0916 18:10:09.902638  392787 fix.go:229] Guest: 2024-09-16 18:10:09.877644731 +0000 UTC Remote: 2024-09-16 18:10:09.787831605 +0000 UTC m=+23.962305313 (delta=89.813126ms)
	I0916 18:10:09.902671  392787 fix.go:200] guest clock delta is within tolerance: 89.813126ms
	I0916 18:10:09.902683  392787 start.go:83] releasing machines lock for "ha-365438", held for 23.966604338s
	I0916 18:10:09.902714  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.902983  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:09.905268  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.905547  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.905589  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.905696  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.906225  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.906452  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.906551  392787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:10:09.906603  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.906638  392787 ssh_runner.go:195] Run: cat /version.json
	I0916 18:10:09.906665  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.909274  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909303  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909658  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.909702  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.909727  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909808  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909859  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.910046  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.910048  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.910237  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.910248  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.910457  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.910445  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:09.910571  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:10.020506  392787 ssh_runner.go:195] Run: systemctl --version
	I0916 18:10:10.026746  392787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:10:10.186605  392787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:10:10.192998  392787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:10:10.193074  392787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:10:10.210382  392787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 18:10:10.210412  392787 start.go:495] detecting cgroup driver to use...
	I0916 18:10:10.210482  392787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:10:10.227369  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:10:10.242414  392787 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:10:10.242485  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:10:10.257131  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:10:10.271966  392787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:10:10.391099  392787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:10:10.572487  392787 docker.go:233] disabling docker service ...
	I0916 18:10:10.572566  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:10:10.588966  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:10:10.601981  392787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:10:10.740636  392787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:10:10.878326  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:10:10.892590  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:10:10.911709  392787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:10:10.911775  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.922389  392787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:10:10.922465  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.933274  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.944462  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.955915  392787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:10:10.967551  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.979310  392787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.998237  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
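Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (section headers omitted; the exact layout depends on the stock config shipped in the ISO):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]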
	I0916 18:10:11.009805  392787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:10:11.019885  392787 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 18:10:11.019951  392787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 18:10:11.033562  392787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
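Because the bridge sysctl could not be read, br_netfilter is loaded and IPv4 forwarding is enabled; afterwards these checks should succeed on the guest (illustrative):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # should now resolve instead of 'No such file or directory'
    cat /proc/sys/net/ipv4/ip_forward            # expected: 1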
	I0916 18:10:11.044563  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:10:11.172744  392787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:10:11.271253  392787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:10:11.271339  392787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:10:11.276484  392787 start.go:563] Will wait 60s for crictl version
	I0916 18:10:11.276555  392787 ssh_runner.go:195] Run: which crictl
	I0916 18:10:11.280518  392787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:10:11.321488  392787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:10:11.321594  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:11.350882  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:11.381527  392787 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:10:11.382847  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:11.385449  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:11.385812  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:11.385839  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:11.386079  392787 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:10:11.390612  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:10:11.406408  392787 kubeadm.go:883] updating cluster {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 18:10:11.406535  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:10:11.406590  392787 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:10:11.447200  392787 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 18:10:11.447268  392787 ssh_runner.go:195] Run: which lz4
	I0916 18:10:11.451561  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 18:10:11.451682  392787 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 18:10:11.456239  392787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 18:10:11.456268  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 18:10:12.876494  392787 crio.go:462] duration metric: took 1.4248413s to copy over tarball
	I0916 18:10:12.876584  392787 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 18:10:14.935900  392787 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059286389s)
	I0916 18:10:14.935940  392787 crio.go:469] duration metric: took 2.059412063s to extract the tarball
	I0916 18:10:14.935951  392787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 18:10:14.973313  392787 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:10:15.019757  392787 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:10:15.019785  392787 cache_images.go:84] Images are preloaded, skipping loading
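After the preload tarball has been unpacked into /var, the second "crictl images" call finds the control-plane images, so nothing has to be pulled. A spot check on the guest (illustrative):

    sudo crictl images | grep 'registry.k8s.io/kube-apiserver'    # expected: tag v1.31.1 listed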
	I0916 18:10:15.019793  392787 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.31.1 crio true true} ...
	I0916 18:10:15.019895  392787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
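The [Unit]/[Service]/[Install] override above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp further down); on the guest the effective unit can be inspected with (illustrative):

    systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in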
	I0916 18:10:15.019965  392787 ssh_runner.go:195] Run: crio config
	I0916 18:10:15.074859  392787 cni.go:84] Creating CNI manager for ""
	I0916 18:10:15.074884  392787 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 18:10:15.074896  392787 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 18:10:15.074922  392787 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-365438 NodeName:ha-365438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 18:10:15.075071  392787 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-365438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
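The rendered kubeadm configuration above is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2153-byte scp further down). If one wanted to exercise it by hand before an init, something like the following would work once the staged file is moved into place (illustrative, not part of the test flow):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run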
	
	I0916 18:10:15.075097  392787 kube-vip.go:115] generating kube-vip config ...
	I0916 18:10:15.075140  392787 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:10:15.093642  392787 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:10:15.093768  392787 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
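This manifest is written as a static pod to /etc/kubernetes/manifests/kube-vip.yaml (the 1447-byte scp further down), so kubelet runs kube-vip directly and the leader advertises the HA virtual IP 192.168.39.254 on port 8443. Once the control plane is up this can be confirmed with (illustrative):

    sudo crictl ps | grep kube-vip
    ip addr show dev eth0 | grep 192.168.39.254    # VIP bound on the current leader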
	I0916 18:10:15.093826  392787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:10:15.104325  392787 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 18:10:15.104413  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 18:10:15.115282  392787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0916 18:10:15.133359  392787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:10:15.151228  392787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0916 18:10:15.169219  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 18:10:15.187557  392787 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:10:15.192161  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:10:15.206388  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:10:15.342949  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:10:15.359967  392787 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.165
	I0916 18:10:15.359996  392787 certs.go:194] generating shared ca certs ...
	I0916 18:10:15.360015  392787 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.360194  392787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:10:15.360258  392787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:10:15.360273  392787 certs.go:256] generating profile certs ...
	I0916 18:10:15.360337  392787 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:10:15.360373  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt with IP's: []
	I0916 18:10:15.551306  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt ...
	I0916 18:10:15.551342  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt: {Name:mkc3db8b1101003a3b29c04d7b8c9aeb779fd32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.551543  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key ...
	I0916 18:10:15.551560  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key: {Name:mk23aeda90888d0044ea468a8c24dd15a14c193f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.551673  392787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa
	I0916 18:10:15.551692  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.254]
	I0916 18:10:15.656888  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa ...
	I0916 18:10:15.656947  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa: {Name:mke35516cd8bcea2b1e4bff6c9e1c4b746bd51cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.657136  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa ...
	I0916 18:10:15.657154  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa: {Name:mk67396fd6a5e04a27321be953e22e674a4f06bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.657257  392787 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:10:15.657356  392787 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:10:15.657460  392787 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:10:15.657481  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt with IP's: []
	I0916 18:10:15.940352  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt ...
	I0916 18:10:15.940389  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt: {Name:mke3aeb0e02e8ca7bf96d4b2cba27ef685c7b48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.940580  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key ...
	I0916 18:10:15.940595  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key: {Name:mkb18a35b9920b50dca88235e28388a5820fbec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.940690  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:10:15.940713  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:10:15.940729  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:10:15.940750  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:10:15.940767  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:10:15.940790  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:10:15.940808  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:10:15.940838  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:10:15.940906  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:10:15.940975  392787 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:10:15.940990  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:10:15.941028  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:10:15.941072  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:10:15.941108  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:10:15.941164  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:15.941204  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:15.941225  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:10:15.941243  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:10:15.941910  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:10:15.968962  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:10:15.994450  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:10:16.020858  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:10:16.048300  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 18:10:16.074581  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 18:10:16.100221  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:10:16.125842  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:10:16.154371  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:10:16.179696  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:10:16.214450  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:10:16.240446  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 18:10:16.259949  392787 ssh_runner.go:195] Run: openssl version
	I0916 18:10:16.266188  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:10:16.278092  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:16.283093  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:16.283168  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:16.289592  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:10:16.301039  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:10:16.312436  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:10:16.317338  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:10:16.317443  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:10:16.323451  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:10:16.334583  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:10:16.346904  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:10:16.351957  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:10:16.352006  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:10:16.358300  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
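The three test/ln pairs above follow the standard OpenSSL subject-hash convention for /etc/ssl/certs: each CA file is linked under the name <subject-hash>.0 (here b5213941.0, 51391683.0 and 3ec20f2e.0) so that OpenSSL can find it during chain verification. Sketched generically, with cert.pem standing in for any of the certificates above (illustrative only, not a command from this run):

	  # compute the subject hash and create the lookup symlink
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/cert.pem)
	  sudo ln -fs /usr/share/ca-certificates/cert.pem "/etc/ssl/certs/${hash}.0"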
	I0916 18:10:16.370577  392787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:10:16.375213  392787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 18:10:16.375275  392787 kubeadm.go:392] StartCluster: {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:10:16.375380  392787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 18:10:16.375457  392787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 18:10:16.418815  392787 cri.go:89] found id: ""
	I0916 18:10:16.418883  392787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 18:10:16.429042  392787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 18:10:16.439116  392787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 18:10:16.448909  392787 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 18:10:16.448955  392787 kubeadm.go:157] found existing configuration files:
	
	I0916 18:10:16.449017  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 18:10:16.457939  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 18:10:16.457999  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 18:10:16.469172  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 18:10:16.478337  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 18:10:16.478410  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 18:10:16.489316  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 18:10:16.499123  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 18:10:16.499183  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 18:10:16.509331  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 18:10:16.519711  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 18:10:16.519778  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 18:10:16.529881  392787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 18:10:16.641425  392787 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 18:10:16.641531  392787 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 18:10:16.740380  392787 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 18:10:16.740525  392787 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 18:10:16.740686  392787 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 18:10:16.760499  392787 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 18:10:16.920961  392787 out.go:235]   - Generating certificates and keys ...
	I0916 18:10:16.921100  392787 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 18:10:16.921171  392787 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 18:10:16.998342  392787 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 18:10:17.125003  392787 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 18:10:17.361090  392787 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 18:10:17.742955  392787 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 18:10:17.849209  392787 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 18:10:17.849413  392787 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-365438 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0916 18:10:17.928825  392787 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 18:10:17.929089  392787 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-365438 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0916 18:10:18.075649  392787 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 18:10:18.204742  392787 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 18:10:18.245512  392787 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 18:10:18.245734  392787 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 18:10:18.659010  392787 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 18:10:18.872130  392787 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 18:10:18.929814  392787 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 18:10:19.311882  392787 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 18:10:19.409886  392787 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 18:10:19.410721  392787 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 18:10:19.414179  392787 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 18:10:19.514491  392787 out.go:235]   - Booting up control plane ...
	I0916 18:10:19.514679  392787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 18:10:19.514817  392787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 18:10:19.514921  392787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 18:10:19.515110  392787 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 18:10:19.515272  392787 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 18:10:19.515350  392787 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 18:10:19.589659  392787 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 18:10:19.589838  392787 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 18:10:20.589849  392787 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001469256s
	I0916 18:10:20.589940  392787 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 18:10:26.435717  392787 kubeadm.go:310] [api-check] The API server is healthy after 5.848884759s
	I0916 18:10:26.453718  392787 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 18:10:26.466069  392787 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 18:10:26.493083  392787 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 18:10:26.493369  392787 kubeadm.go:310] [mark-control-plane] Marking the node ha-365438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 18:10:26.506186  392787 kubeadm.go:310] [bootstrap-token] Using token: tw4zgl.f8vkt3x516r20x53
	I0916 18:10:26.507638  392787 out.go:235]   - Configuring RBAC rules ...
	I0916 18:10:26.507809  392787 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 18:10:26.517833  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 18:10:26.531746  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 18:10:26.537095  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 18:10:26.541072  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 18:10:26.548789  392787 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 18:10:26.844266  392787 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 18:10:27.277806  392787 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 18:10:27.842459  392787 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 18:10:27.844557  392787 kubeadm.go:310] 
	I0916 18:10:27.844677  392787 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 18:10:27.844691  392787 kubeadm.go:310] 
	I0916 18:10:27.844823  392787 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 18:10:27.844840  392787 kubeadm.go:310] 
	I0916 18:10:27.844874  392787 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 18:10:27.844971  392787 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 18:10:27.845041  392787 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 18:10:27.845051  392787 kubeadm.go:310] 
	I0916 18:10:27.845124  392787 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 18:10:27.845133  392787 kubeadm.go:310] 
	I0916 18:10:27.845194  392787 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 18:10:27.845204  392787 kubeadm.go:310] 
	I0916 18:10:27.845299  392787 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 18:10:27.845432  392787 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 18:10:27.845516  392787 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 18:10:27.845523  392787 kubeadm.go:310] 
	I0916 18:10:27.845646  392787 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 18:10:27.845731  392787 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 18:10:27.845738  392787 kubeadm.go:310] 
	I0916 18:10:27.845815  392787 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tw4zgl.f8vkt3x516r20x53 \
	I0916 18:10:27.845905  392787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e \
	I0916 18:10:27.845926  392787 kubeadm.go:310] 	--control-plane 
	I0916 18:10:27.845929  392787 kubeadm.go:310] 
	I0916 18:10:27.845998  392787 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 18:10:27.846003  392787 kubeadm.go:310] 
	I0916 18:10:27.846070  392787 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tw4zgl.f8vkt3x516r20x53 \
	I0916 18:10:27.846176  392787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e 
	I0916 18:10:27.848408  392787 kubeadm.go:310] W0916 18:10:16.621020     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 18:10:27.848816  392787 kubeadm.go:310] W0916 18:10:16.621804     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 18:10:27.848993  392787 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
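The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key. Should it need to be recomputed later on the control plane, the kubeadm-documented pipeline is roughly the following, with the CA path adjusted to this cluster's certificateDir, /var/lib/minikube/certs:

	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'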
	I0916 18:10:27.849055  392787 cni.go:84] Creating CNI manager for ""
	I0916 18:10:27.849070  392787 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 18:10:27.851663  392787 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 18:10:27.853746  392787 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 18:10:27.860026  392787 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 18:10:27.860053  392787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 18:10:27.881459  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 18:10:28.293048  392787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 18:10:28.293098  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:28.293102  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-365438 minikube.k8s.io/updated_at=2024_09_16T18_10_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=ha-365438 minikube.k8s.io/primary=true
	I0916 18:10:28.494326  392787 ops.go:34] apiserver oom_adj: -16
	I0916 18:10:28.494487  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:28.995554  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:29.494840  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:29.994554  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:30.494592  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:30.994910  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:31.495575  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:31.994604  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:32.184635  392787 kubeadm.go:1113] duration metric: took 3.891601216s to wait for elevateKubeSystemPrivileges
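elevateKubeSystemPrivileges is the step started at 18:10:28.293: it creates the minikube-rbac clusterrolebinding granting cluster-admin to the kube-system:default service account, and the repeated "get sa default" calls simply wait until that service account exists. A quick follow-up check from the host, assuming kubectl and the kubeconfig written by this run, could be:

	  kubectl --kubeconfig /home/jenkins/minikube-integration/19649-371203/kubeconfig \
	    get clusterrolebinding minikube-rbac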
	I0916 18:10:32.184688  392787 kubeadm.go:394] duration metric: took 15.809420067s to StartCluster
	I0916 18:10:32.184718  392787 settings.go:142] acquiring lock: {Name:mk9af1b5fb868180f97a2648a387fb06c7d5fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:32.184834  392787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:10:32.185867  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/kubeconfig: {Name:mk8f19e4e61aad6cdecf3a2028815277e5ffb248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:32.186174  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 18:10:32.186174  392787 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:10:32.186202  392787 start.go:241] waiting for startup goroutines ...
	I0916 18:10:32.186221  392787 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 18:10:32.186312  392787 addons.go:69] Setting storage-provisioner=true in profile "ha-365438"
	I0916 18:10:32.186334  392787 addons.go:234] Setting addon storage-provisioner=true in "ha-365438"
	I0916 18:10:32.186390  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:10:32.186333  392787 addons.go:69] Setting default-storageclass=true in profile "ha-365438"
	I0916 18:10:32.186447  392787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-365438"
	I0916 18:10:32.186489  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:32.186867  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.186891  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.186924  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.187014  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.203255  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0916 18:10:32.203442  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0916 18:10:32.203855  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.203908  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.204477  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.204514  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.204633  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.204660  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.204976  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.205035  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.205232  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:32.205522  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.205573  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.207426  392787 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:10:32.207825  392787 kapi.go:59] client config for ha-365438: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt", KeyFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key", CAFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 18:10:32.208405  392787 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 18:10:32.208723  392787 addons.go:234] Setting addon default-storageclass=true in "ha-365438"
	I0916 18:10:32.208776  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:10:32.209194  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.209243  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.222111  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0916 18:10:32.222679  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.223233  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.223266  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.223690  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.223910  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:32.225573  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0916 18:10:32.225894  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:32.226036  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.226457  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.226474  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.226776  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.227398  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.227445  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.228101  392787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 18:10:32.229529  392787 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 18:10:32.229551  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 18:10:32.229573  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:32.232705  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.232894  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:32.232960  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.233073  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:32.233317  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:32.233491  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:32.233658  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:32.243577  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39127
	I0916 18:10:32.244041  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.244603  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.244643  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.245037  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.245260  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:32.247169  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:32.247400  392787 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 18:10:32.247421  392787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 18:10:32.247445  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:32.250674  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.251107  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:32.251138  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.251306  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:32.251487  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:32.251614  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:32.251722  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:32.417097  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 18:10:32.437995  392787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 18:10:32.439876  392787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 18:10:33.053402  392787 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
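The sed pipeline at 18:10:32.417 edits the coredns ConfigMap in place, so after the replace the Corefile contains, in addition to the stock directives, roughly this fragment (reconstructed from the sed expressions above, not dumped from the cluster):

	        log
	        errors
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf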
	I0916 18:10:33.053505  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.053531  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.053838  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.053851  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.053860  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.053866  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.054145  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.054163  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.054180  392787 main.go:141] libmachine: (ha-365438) DBG | Closing plugin on server side
	I0916 18:10:33.054230  392787 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 18:10:33.054249  392787 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 18:10:33.054345  392787 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 18:10:33.054354  392787 round_trippers.go:469] Request Headers:
	I0916 18:10:33.054364  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:10:33.054372  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:10:33.063977  392787 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 18:10:33.064590  392787 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 18:10:33.064605  392787 round_trippers.go:469] Request Headers:
	I0916 18:10:33.064612  392787 round_trippers.go:473]     Content-Type: application/json
	I0916 18:10:33.064625  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:10:33.064628  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:10:33.067585  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:10:33.067787  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.067804  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.068116  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.068138  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.068154  392787 main.go:141] libmachine: (ha-365438) DBG | Closing plugin on server side
	I0916 18:10:33.314618  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.314651  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.315003  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.315062  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.315081  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.315088  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.315110  392787 main.go:141] libmachine: (ha-365438) DBG | Closing plugin on server side
	I0916 18:10:33.315385  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.315403  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.317232  392787 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 18:10:33.318799  392787 addons.go:510] duration metric: took 1.132579143s for enable addons: enabled=[default-storageclass storage-provisioner]
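Only the two default addons end up enabled for this ha-365438 profile; everything else in the toEnable map above is false. If one wanted to confirm after the fact, using the binary and profile name from this run (a manual check, not something the test itself performs):

	  out/minikube-linux-amd64 addons list -p ha-365438
	  kubectl -n kube-system get pod storage-provisioner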
	I0916 18:10:33.318848  392787 start.go:246] waiting for cluster config update ...
	I0916 18:10:33.318865  392787 start.go:255] writing updated cluster config ...
	I0916 18:10:33.320826  392787 out.go:201] 
	I0916 18:10:33.322359  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:33.322461  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:33.324743  392787 out.go:177] * Starting "ha-365438-m02" control-plane node in "ha-365438" cluster
	I0916 18:10:33.326567  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:10:33.326599  392787 cache.go:56] Caching tarball of preloaded images
	I0916 18:10:33.326724  392787 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:10:33.326741  392787 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:10:33.326828  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:33.327285  392787 start.go:360] acquireMachinesLock for ha-365438-m02: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:10:33.327372  392787 start.go:364] duration metric: took 64.213µs to acquireMachinesLock for "ha-365438-m02"
	I0916 18:10:33.327391  392787 start.go:93] Provisioning new machine with config: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:10:33.327457  392787 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 18:10:33.329287  392787 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 18:10:33.329421  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:33.329482  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:33.344726  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0916 18:10:33.345292  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:33.345856  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:33.345885  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:33.346250  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:33.346458  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:33.346654  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:33.346828  392787 start.go:159] libmachine.API.Create for "ha-365438" (driver="kvm2")
	I0916 18:10:33.346884  392787 client.go:168] LocalClient.Create starting
	I0916 18:10:33.346999  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 18:10:33.347057  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:10:33.347081  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:10:33.347151  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 18:10:33.347178  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:10:33.347194  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:10:33.347217  392787 main.go:141] libmachine: Running pre-create checks...
	I0916 18:10:33.347228  392787 main.go:141] libmachine: (ha-365438-m02) Calling .PreCreateCheck
	I0916 18:10:33.347425  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetConfigRaw
	I0916 18:10:33.347823  392787 main.go:141] libmachine: Creating machine...
	I0916 18:10:33.347840  392787 main.go:141] libmachine: (ha-365438-m02) Calling .Create
	I0916 18:10:33.348010  392787 main.go:141] libmachine: (ha-365438-m02) Creating KVM machine...
	I0916 18:10:33.349416  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found existing default KVM network
	I0916 18:10:33.349576  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found existing private KVM network mk-ha-365438
	I0916 18:10:33.349710  392787 main.go:141] libmachine: (ha-365438-m02) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02 ...
	I0916 18:10:33.349734  392787 main.go:141] libmachine: (ha-365438-m02) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 18:10:33.349860  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.349743  393164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:10:33.349954  392787 main.go:141] libmachine: (ha-365438-m02) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 18:10:33.622442  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.622279  393164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa...
	I0916 18:10:33.683496  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.683324  393164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/ha-365438-m02.rawdisk...
	I0916 18:10:33.683530  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Writing magic tar header
	I0916 18:10:33.683544  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Writing SSH key tar header
	I0916 18:10:33.683554  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.683451  393164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02 ...
	I0916 18:10:33.683578  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02
	I0916 18:10:33.683589  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02 (perms=drwx------)
	I0916 18:10:33.683599  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 18:10:33.683613  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:10:33.683636  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 18:10:33.683649  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 18:10:33.683666  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 18:10:33.683677  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 18:10:33.683689  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 18:10:33.683703  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 18:10:33.683715  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home
	I0916 18:10:33.683729  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 18:10:33.683743  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 18:10:33.683753  392787 main.go:141] libmachine: (ha-365438-m02) Creating domain...
	I0916 18:10:33.683764  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Skipping /home - not owner
	I0916 18:10:33.684740  392787 main.go:141] libmachine: (ha-365438-m02) define libvirt domain using xml: 
	I0916 18:10:33.684771  392787 main.go:141] libmachine: (ha-365438-m02) <domain type='kvm'>
	I0916 18:10:33.684783  392787 main.go:141] libmachine: (ha-365438-m02)   <name>ha-365438-m02</name>
	I0916 18:10:33.684795  392787 main.go:141] libmachine: (ha-365438-m02)   <memory unit='MiB'>2200</memory>
	I0916 18:10:33.684803  392787 main.go:141] libmachine: (ha-365438-m02)   <vcpu>2</vcpu>
	I0916 18:10:33.684807  392787 main.go:141] libmachine: (ha-365438-m02)   <features>
	I0916 18:10:33.684815  392787 main.go:141] libmachine: (ha-365438-m02)     <acpi/>
	I0916 18:10:33.684819  392787 main.go:141] libmachine: (ha-365438-m02)     <apic/>
	I0916 18:10:33.684824  392787 main.go:141] libmachine: (ha-365438-m02)     <pae/>
	I0916 18:10:33.684827  392787 main.go:141] libmachine: (ha-365438-m02)     
	I0916 18:10:33.684832  392787 main.go:141] libmachine: (ha-365438-m02)   </features>
	I0916 18:10:33.684837  392787 main.go:141] libmachine: (ha-365438-m02)   <cpu mode='host-passthrough'>
	I0916 18:10:33.684843  392787 main.go:141] libmachine: (ha-365438-m02)   
	I0916 18:10:33.684847  392787 main.go:141] libmachine: (ha-365438-m02)   </cpu>
	I0916 18:10:33.684854  392787 main.go:141] libmachine: (ha-365438-m02)   <os>
	I0916 18:10:33.684858  392787 main.go:141] libmachine: (ha-365438-m02)     <type>hvm</type>
	I0916 18:10:33.684896  392787 main.go:141] libmachine: (ha-365438-m02)     <boot dev='cdrom'/>
	I0916 18:10:33.684934  392787 main.go:141] libmachine: (ha-365438-m02)     <boot dev='hd'/>
	I0916 18:10:33.684950  392787 main.go:141] libmachine: (ha-365438-m02)     <bootmenu enable='no'/>
	I0916 18:10:33.684959  392787 main.go:141] libmachine: (ha-365438-m02)   </os>
	I0916 18:10:33.684968  392787 main.go:141] libmachine: (ha-365438-m02)   <devices>
	I0916 18:10:33.684979  392787 main.go:141] libmachine: (ha-365438-m02)     <disk type='file' device='cdrom'>
	I0916 18:10:33.684994  392787 main.go:141] libmachine: (ha-365438-m02)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/boot2docker.iso'/>
	I0916 18:10:33.685005  392787 main.go:141] libmachine: (ha-365438-m02)       <target dev='hdc' bus='scsi'/>
	I0916 18:10:33.685016  392787 main.go:141] libmachine: (ha-365438-m02)       <readonly/>
	I0916 18:10:33.685031  392787 main.go:141] libmachine: (ha-365438-m02)     </disk>
	I0916 18:10:33.685047  392787 main.go:141] libmachine: (ha-365438-m02)     <disk type='file' device='disk'>
	I0916 18:10:33.685066  392787 main.go:141] libmachine: (ha-365438-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 18:10:33.685082  392787 main.go:141] libmachine: (ha-365438-m02)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/ha-365438-m02.rawdisk'/>
	I0916 18:10:33.685094  392787 main.go:141] libmachine: (ha-365438-m02)       <target dev='hda' bus='virtio'/>
	I0916 18:10:33.685103  392787 main.go:141] libmachine: (ha-365438-m02)     </disk>
	I0916 18:10:33.685113  392787 main.go:141] libmachine: (ha-365438-m02)     <interface type='network'>
	I0916 18:10:33.685126  392787 main.go:141] libmachine: (ha-365438-m02)       <source network='mk-ha-365438'/>
	I0916 18:10:33.685138  392787 main.go:141] libmachine: (ha-365438-m02)       <model type='virtio'/>
	I0916 18:10:33.685145  392787 main.go:141] libmachine: (ha-365438-m02)     </interface>
	I0916 18:10:33.685156  392787 main.go:141] libmachine: (ha-365438-m02)     <interface type='network'>
	I0916 18:10:33.685166  392787 main.go:141] libmachine: (ha-365438-m02)       <source network='default'/>
	I0916 18:10:33.685177  392787 main.go:141] libmachine: (ha-365438-m02)       <model type='virtio'/>
	I0916 18:10:33.685186  392787 main.go:141] libmachine: (ha-365438-m02)     </interface>
	I0916 18:10:33.685208  392787 main.go:141] libmachine: (ha-365438-m02)     <serial type='pty'>
	I0916 18:10:33.685225  392787 main.go:141] libmachine: (ha-365438-m02)       <target port='0'/>
	I0916 18:10:33.685237  392787 main.go:141] libmachine: (ha-365438-m02)     </serial>
	I0916 18:10:33.685243  392787 main.go:141] libmachine: (ha-365438-m02)     <console type='pty'>
	I0916 18:10:33.685255  392787 main.go:141] libmachine: (ha-365438-m02)       <target type='serial' port='0'/>
	I0916 18:10:33.685262  392787 main.go:141] libmachine: (ha-365438-m02)     </console>
	I0916 18:10:33.685269  392787 main.go:141] libmachine: (ha-365438-m02)     <rng model='virtio'>
	I0916 18:10:33.685275  392787 main.go:141] libmachine: (ha-365438-m02)       <backend model='random'>/dev/random</backend>
	I0916 18:10:33.685282  392787 main.go:141] libmachine: (ha-365438-m02)     </rng>
	I0916 18:10:33.685286  392787 main.go:141] libmachine: (ha-365438-m02)     
	I0916 18:10:33.685293  392787 main.go:141] libmachine: (ha-365438-m02)     
	I0916 18:10:33.685301  392787 main.go:141] libmachine: (ha-365438-m02)   </devices>
	I0916 18:10:33.685319  392787 main.go:141] libmachine: (ha-365438-m02) </domain>
	I0916 18:10:33.685335  392787 main.go:141] libmachine: (ha-365438-m02) 
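The block above is the libvirt domain XML the kvm2 driver defines for ha-365438-m02: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the mk-ha-365438 and default networks. The driver itself talks to libvirt through its API; purely as an illustrative sketch (not minikube code), the same define-and-start flow could be driven from Go by shelling out to virsh, where "domain.xml" is a hypothetical file holding XML like the block logged above:

	// Sketch only: define and start a libvirt domain from an XML file via virsh.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func virsh(args ...string) error {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
		return nil
	}

	func main() {
		// "domain.xml" is a hypothetical path holding the XML shown in the log above.
		if err := virsh("define", "domain.xml"); err != nil { // register the domain with libvirt
			log.Fatal(err)
		}
		if err := virsh("start", "ha-365438-m02"); err != nil { // boot the VM
			log.Fatal(err)
		}
	}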
	I0916 18:10:33.692250  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:93:a9:f4 in network default
	I0916 18:10:33.692837  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:33.692879  392787 main.go:141] libmachine: (ha-365438-m02) Ensuring networks are active...
	I0916 18:10:33.693646  392787 main.go:141] libmachine: (ha-365438-m02) Ensuring network default is active
	I0916 18:10:33.693968  392787 main.go:141] libmachine: (ha-365438-m02) Ensuring network mk-ha-365438 is active
	I0916 18:10:33.694323  392787 main.go:141] libmachine: (ha-365438-m02) Getting domain xml...
	I0916 18:10:33.695108  392787 main.go:141] libmachine: (ha-365438-m02) Creating domain...
	I0916 18:10:34.930246  392787 main.go:141] libmachine: (ha-365438-m02) Waiting to get IP...
	I0916 18:10:34.930981  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:34.931456  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:34.931477  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:34.931417  393164 retry.go:31] will retry after 235.385827ms: waiting for machine to come up
	I0916 18:10:35.169108  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:35.169640  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:35.169666  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:35.169598  393164 retry.go:31] will retry after 348.78948ms: waiting for machine to come up
	I0916 18:10:35.520267  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:35.520777  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:35.520802  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:35.520722  393164 retry.go:31] will retry after 422.811372ms: waiting for machine to come up
	I0916 18:10:35.945450  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:35.945886  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:35.945909  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:35.945861  393164 retry.go:31] will retry after 520.351266ms: waiting for machine to come up
	I0916 18:10:36.467407  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:36.467900  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:36.467929  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:36.467852  393164 retry.go:31] will retry after 750.8123ms: waiting for machine to come up
	I0916 18:10:37.219915  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:37.220404  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:37.220438  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:37.220351  393164 retry.go:31] will retry after 878.610223ms: waiting for machine to come up
	I0916 18:10:38.100678  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:38.101223  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:38.101252  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:38.101151  393164 retry.go:31] will retry after 782.076333ms: waiting for machine to come up
	I0916 18:10:38.884536  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:38.884997  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:38.885027  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:38.884914  393164 retry.go:31] will retry after 1.480505092s: waiting for machine to come up
	I0916 18:10:40.366675  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:40.367305  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:40.367345  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:40.367246  393164 retry.go:31] will retry after 1.861407296s: waiting for machine to come up
	I0916 18:10:42.231317  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:42.231771  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:42.231798  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:42.231712  393164 retry.go:31] will retry after 1.504488445s: waiting for machine to come up
	I0916 18:10:43.737950  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:43.738233  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:43.738262  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:43.738200  393164 retry.go:31] will retry after 1.87598511s: waiting for machine to come up
	I0916 18:10:45.616256  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:45.616716  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:45.616744  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:45.616666  393164 retry.go:31] will retry after 2.223821755s: waiting for machine to come up
	I0916 18:10:47.843191  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:47.843584  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:47.843607  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:47.843532  393164 retry.go:31] will retry after 3.555447139s: waiting for machine to come up
	I0916 18:10:51.402441  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:51.402828  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:51.402853  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:51.402798  393164 retry.go:31] will retry after 3.446453336s: waiting for machine to come up
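The retry.go lines above poll libvirt for a DHCP lease on the domain's MAC address, sleeping a roughly growing, jittered interval between attempts until an IP appears or the overall wait gives up. A minimal sketch of that retry-with-backoff shape, assuming a hypothetical lookupIP helper in place of the real lease query:

	// Sketch only: wait for a VM's IP with growing, jittered backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying libvirt DHCP leases by MAC.
	func lookupIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(mac); err == nil {
				return ip, nil
			}
			// grow the wait and add jitter, similar in spirit to the intervals logged above
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2
		}
		return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:e9:b2:f7", 5*time.Second); err != nil {
			fmt.Println(err)
		}
	}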
	I0916 18:10:54.850944  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.851447  392787 main.go:141] libmachine: (ha-365438-m02) Found IP for machine: 192.168.39.18
	I0916 18:10:54.851476  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has current primary IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.851485  392787 main.go:141] libmachine: (ha-365438-m02) Reserving static IP address...
	I0916 18:10:54.852073  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find host DHCP lease matching {name: "ha-365438-m02", mac: "52:54:00:e9:b2:f7", ip: "192.168.39.18"} in network mk-ha-365438
	I0916 18:10:54.927598  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Getting to WaitForSSH function...
	I0916 18:10:54.927633  392787 main.go:141] libmachine: (ha-365438-m02) Reserved static IP address: 192.168.39.18
	I0916 18:10:54.927647  392787 main.go:141] libmachine: (ha-365438-m02) Waiting for SSH to be available...
	I0916 18:10:54.930258  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.930667  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:54.930701  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.930942  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Using SSH client type: external
	I0916 18:10:54.930968  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa (-rw-------)
	I0916 18:10:54.931002  392787 main.go:141] libmachine: (ha-365438-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 18:10:54.931016  392787 main.go:141] libmachine: (ha-365438-m02) DBG | About to run SSH command:
	I0916 18:10:54.931049  392787 main.go:141] libmachine: (ha-365438-m02) DBG | exit 0
	I0916 18:10:55.061259  392787 main.go:141] libmachine: (ha-365438-m02) DBG | SSH cmd err, output: <nil>: 
	I0916 18:10:55.061561  392787 main.go:141] libmachine: (ha-365438-m02) KVM machine creation complete!
	I0916 18:10:55.061813  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetConfigRaw
	I0916 18:10:55.062383  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:55.062549  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:55.062742  392787 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 18:10:55.062756  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:10:55.064191  392787 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 18:10:55.064206  392787 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 18:10:55.064211  392787 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 18:10:55.064216  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.066231  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.066507  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.066535  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.066665  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.066836  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.066989  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.067125  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.067275  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.067508  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.067519  392787 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 18:10:55.180358  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
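At this point the provisioner has confirmed SSH reachability by running `exit 0` against the machine's generated key (the WaitForSSH step, first via the external ssh binary and then the native client). As a hedged sketch of that check using golang.org/x/crypto/ssh, not the libmachine code itself; the user, key path and address simply mirror the values in the log:

	// Sketch only: run "exit 0" over SSH with a private key as a reachability check.
	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
		}
		client, err := ssh.Dial("tcp", "192.168.39.18:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		if err := session.Run("exit 0"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is available")
	}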
	I0916 18:10:55.180406  392787 main.go:141] libmachine: Detecting the provisioner...
	I0916 18:10:55.180418  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.183181  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.183571  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.183599  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.183721  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.183916  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.184098  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.184207  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.184357  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.184579  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.184592  392787 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 18:10:55.298227  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 18:10:55.298321  392787 main.go:141] libmachine: found compatible host: buildroot
	I0916 18:10:55.298335  392787 main.go:141] libmachine: Provisioning with buildroot...
	I0916 18:10:55.298349  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:55.298609  392787 buildroot.go:166] provisioning hostname "ha-365438-m02"
	I0916 18:10:55.298629  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:55.298847  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.301662  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.302063  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.302091  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.302204  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.302398  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.302565  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.302721  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.302883  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.303092  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.303105  392787 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438-m02 && echo "ha-365438-m02" | sudo tee /etc/hostname
	I0916 18:10:55.431880  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438-m02
	
	I0916 18:10:55.431916  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.434778  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.435067  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.435101  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.435316  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.435517  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.435707  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.435817  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.435951  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.436169  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.436186  392787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:10:55.558139  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:55.558170  392787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:10:55.558191  392787 buildroot.go:174] setting up certificates
	I0916 18:10:55.558204  392787 provision.go:84] configureAuth start
	I0916 18:10:55.558216  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:55.558517  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:55.561254  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.561613  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.561646  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.561762  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.563980  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.564292  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.564319  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.564424  392787 provision.go:143] copyHostCerts
	I0916 18:10:55.564462  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:55.564501  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:10:55.564515  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:55.564595  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:10:55.564686  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:55.564704  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:10:55.564709  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:55.564735  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:10:55.564778  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:55.564794  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:10:55.564800  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:55.564820  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:10:55.564868  392787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438-m02 san=[127.0.0.1 192.168.39.18 ha-365438-m02 localhost minikube]
	I0916 18:10:55.659270  392787 provision.go:177] copyRemoteCerts
	I0916 18:10:55.659331  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:10:55.659357  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.662129  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.662465  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.662496  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.662767  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.662951  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.663118  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.663262  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:55.751469  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:10:55.751547  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:10:55.780545  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:10:55.780645  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 18:10:55.806978  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:10:55.807056  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 18:10:55.832267  392787 provision.go:87] duration metric: took 274.049415ms to configureAuth
	I0916 18:10:55.832301  392787 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:10:55.832484  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:55.832558  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.835052  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.835378  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.835424  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.835638  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.835858  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.836019  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.836161  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.836384  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.836602  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.836618  392787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:10:56.078921  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:10:56.078956  392787 main.go:141] libmachine: Checking connection to Docker...
	I0916 18:10:56.078965  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetURL
	I0916 18:10:56.080562  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Using libvirt version 6000000
	I0916 18:10:56.084040  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.084426  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.084455  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.084620  392787 main.go:141] libmachine: Docker is up and running!
	I0916 18:10:56.084639  392787 main.go:141] libmachine: Reticulating splines...
	I0916 18:10:56.084647  392787 client.go:171] duration metric: took 22.737750267s to LocalClient.Create
	I0916 18:10:56.084670  392787 start.go:167] duration metric: took 22.737847372s to libmachine.API.Create "ha-365438"
	I0916 18:10:56.084681  392787 start.go:293] postStartSetup for "ha-365438-m02" (driver="kvm2")
	I0916 18:10:56.084691  392787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:10:56.084717  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.084957  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:10:56.084982  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:56.087111  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.087449  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.087481  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.087639  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.087785  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.087934  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.088041  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:56.176159  392787 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:10:56.181304  392787 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:10:56.181340  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:10:56.181418  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:10:56.181506  392787 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:10:56.181518  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:10:56.181637  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:10:56.191699  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:56.217543  392787 start.go:296] duration metric: took 132.846204ms for postStartSetup
	I0916 18:10:56.217609  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetConfigRaw
	I0916 18:10:56.218265  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:56.221258  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.221691  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.221719  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.222100  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:56.222316  392787 start.go:128] duration metric: took 22.894847796s to createHost
	I0916 18:10:56.222342  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:56.224636  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.224968  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.224995  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.225137  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.225322  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.225486  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.225671  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.225848  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:56.226032  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:56.226042  392787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:10:56.341865  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510256.296605946
	
	I0916 18:10:56.341890  392787 fix.go:216] guest clock: 1726510256.296605946
	I0916 18:10:56.341897  392787 fix.go:229] Guest: 2024-09-16 18:10:56.296605946 +0000 UTC Remote: 2024-09-16 18:10:56.222328327 +0000 UTC m=+70.396802035 (delta=74.277619ms)
	I0916 18:10:56.341914  392787 fix.go:200] guest clock delta is within tolerance: 74.277619ms
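The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host's view of the time, and only resync if the delta exceeds a tolerance (the 74ms delta here is well within it). A small sketch of that comparison; the tolerance value below is hypothetical, not minikube's actual setting:

	// Sketch only: parse a `date +%s.%N` reading and compare it to local time.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// pad/truncate the fractional part to nanoseconds
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1726510256.296605946") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // hypothetical tolerance
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}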
	I0916 18:10:56.341919  392787 start.go:83] releasing machines lock for "ha-365438-m02", held for 23.014537993s
	I0916 18:10:56.341935  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.342207  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:56.345069  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.345454  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.345484  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.348193  392787 out.go:177] * Found network options:
	I0916 18:10:56.349645  392787 out.go:177]   - NO_PROXY=192.168.39.165
	W0916 18:10:56.351018  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:10:56.351055  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.351741  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.351947  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.352065  392787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:10:56.352102  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	W0916 18:10:56.352342  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:10:56.352416  392787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:10:56.352434  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:56.354999  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355229  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355370  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.355395  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355545  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.355676  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.355697  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355734  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.355857  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.355882  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.356053  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.356061  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:56.356189  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.356285  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:56.597368  392787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:10:56.604127  392787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:10:56.604217  392787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:10:56.621380  392787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 18:10:56.621409  392787 start.go:495] detecting cgroup driver to use...
	I0916 18:10:56.621472  392787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:10:56.638525  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:10:56.652832  392787 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:10:56.652895  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:10:56.666875  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:10:56.681432  392787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:10:56.794171  392787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:10:56.948541  392787 docker.go:233] disabling docker service ...
	I0916 18:10:56.948618  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:10:56.963290  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:10:56.977237  392787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:10:57.098314  392787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:10:57.214672  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:10:57.229040  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:10:57.250234  392787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:10:57.250298  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.261898  392787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:10:57.261986  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.273749  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.285791  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.297387  392787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:10:57.309408  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.320879  392787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.341575  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.354155  392787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:10:57.365273  392787 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 18:10:57.365348  392787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 18:10:57.378772  392787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:10:57.390283  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:10:57.514621  392787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:10:57.617876  392787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:10:57.617971  392787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:10:57.622722  392787 start.go:563] Will wait 60s for crictl version
	I0916 18:10:57.622780  392787 ssh_runner.go:195] Run: which crictl
	I0916 18:10:57.626607  392787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:10:57.666912  392787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:10:57.666997  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:57.696803  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:57.727098  392787 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:10:57.728534  392787 out.go:177]   - env NO_PROXY=192.168.39.165
	I0916 18:10:57.729864  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:57.732684  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:57.733062  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:57.733088  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:57.733256  392787 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:10:57.737616  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:10:57.750177  392787 mustload.go:65] Loading cluster: ha-365438
	I0916 18:10:57.750375  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:57.750632  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:57.750679  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:57.766219  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I0916 18:10:57.766714  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:57.767204  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:57.767226  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:57.767545  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:57.767740  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:57.769216  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:10:57.769502  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:57.769538  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:57.784842  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0916 18:10:57.785407  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:57.785928  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:57.785950  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:57.786284  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:57.786496  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:57.786735  392787 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.18
	I0916 18:10:57.786749  392787 certs.go:194] generating shared ca certs ...
	I0916 18:10:57.786766  392787 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:57.786930  392787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:10:57.786978  392787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:10:57.786991  392787 certs.go:256] generating profile certs ...
	I0916 18:10:57.787090  392787 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:10:57.787123  392787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637
	I0916 18:10:57.787143  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.18 192.168.39.254]
	I0916 18:10:58.073914  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637 ...
	I0916 18:10:58.073946  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637: {Name:mkc37b2841fab59ca238ea965ad7556f32ca348d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:58.074141  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637 ...
	I0916 18:10:58.074162  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637: {Name:mk10897fb048b3932b74ff1e856667592d87e1c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:58.074262  392787 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:10:58.074438  392787 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:10:58.074692  392787 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:10:58.074712  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:10:58.074728  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:10:58.074747  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:10:58.074765  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:10:58.074781  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:10:58.074798  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:10:58.074812  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:10:58.074831  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:10:58.074897  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:10:58.074943  392787 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:10:58.074957  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:10:58.074990  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:10:58.075057  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:10:58.075090  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:10:58.075142  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:58.075180  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.075201  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.075220  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.075261  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:58.078319  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:58.078735  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:58.078764  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:58.078968  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:58.079158  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:58.079317  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:58.079445  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:58.153424  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 18:10:58.159338  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 18:10:58.171317  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 18:10:58.175718  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 18:10:58.186383  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 18:10:58.190833  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 18:10:58.202444  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 18:10:58.207018  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 18:10:58.218636  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 18:10:58.223899  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 18:10:58.236134  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 18:10:58.240865  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 18:10:58.251722  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:10:58.279030  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:10:58.304460  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:10:58.329385  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:10:58.354574  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 18:10:58.378950  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 18:10:58.404566  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:10:58.429261  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:10:58.454157  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:10:58.480818  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:10:58.505156  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:10:58.529092  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 18:10:58.545843  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 18:10:58.562312  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 18:10:58.579473  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 18:10:58.596583  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 18:10:58.614423  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 18:10:58.631330  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 18:10:58.648585  392787 ssh_runner.go:195] Run: openssl version
	I0916 18:10:58.654567  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:10:58.665082  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.669458  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.669527  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.675385  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 18:10:58.686367  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:10:58.696961  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.701655  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.701715  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.707782  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:10:58.718999  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:10:58.730368  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.735333  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.735404  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.741338  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:10:58.752083  392787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:10:58.756412  392787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 18:10:58.756467  392787 kubeadm.go:934] updating node {m02 192.168.39.18 8443 v1.31.1 crio true true} ...
	I0916 18:10:58.756563  392787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 18:10:58.756596  392787 kube-vip.go:115] generating kube-vip config ...
	I0916 18:10:58.756635  392787 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:10:58.773717  392787 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:10:58.773796  392787 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 18:10:58.773854  392787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:10:58.783882  392787 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 18:10:58.783972  392787 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 18:10:58.793538  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 18:10:58.793569  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:10:58.793613  392787 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 18:10:58.793638  392787 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 18:10:58.793671  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:10:58.798198  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 18:10:58.798226  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 18:10:59.865446  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:10:59.865538  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:10:59.870686  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 18:10:59.870730  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 18:10:59.898327  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:10:59.924669  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:10:59.924798  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:10:59.935712  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 18:10:59.935762  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
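	The download.go and "checksum=file:...sha256" lines above fetch the v1.31.1 kubectl/kubeadm/kubelet binaries from dl.k8s.io, validate them against the published .sha256 digests, and only then scp them into /var/lib/minikube/binaries on the node. A minimal Go sketch of that download-and-verify pattern follows; it is illustrative only (not minikube's downloader), and the local output filename is an assumption:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	// fetch downloads a URL fully into memory.
	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}

	func main() {
		// URL taken from the log above; the ".sha256" sibling holds the published digest.
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}
		got := sha256.Sum256(bin)
		want := strings.Fields(string(sum))[0] // first field of the .sha256 file is the hex digest
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch")
		}
		// "kubectl" as the output name is an assumption for this sketch.
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubectl downloaded and verified")
	}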
	I0916 18:11:00.432610  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 18:11:00.442560  392787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 18:11:00.459845  392787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:11:00.476474  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 18:11:00.493079  392787 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:11:00.496926  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:11:00.508998  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:11:00.634897  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:11:00.652295  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:11:00.652800  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:11:00.652856  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:11:00.668024  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0916 18:11:00.668537  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:11:00.669099  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:11:00.669129  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:11:00.669453  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:11:00.669589  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:11:00.669709  392787 start.go:317] joinCluster: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:11:00.669835  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 18:11:00.669857  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:11:00.672691  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:11:00.673188  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:11:00.673216  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:11:00.673356  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:11:00.673556  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:11:00.673716  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:11:00.673853  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:11:00.826960  392787 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:11:00.827002  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zk8jja.80gx1qy4gw2fhz4q --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m02 --control-plane --apiserver-advertise-address=192.168.39.18 --apiserver-bind-port=8443"
	I0916 18:11:24.557299  392787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zk8jja.80gx1qy4gw2fhz4q --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m02 --control-plane --apiserver-advertise-address=192.168.39.18 --apiserver-bind-port=8443": (23.730266599s)
	I0916 18:11:24.557356  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 18:11:25.076897  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-365438-m02 minikube.k8s.io/updated_at=2024_09_16T18_11_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=ha-365438 minikube.k8s.io/primary=false
	I0916 18:11:25.234370  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-365438-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 18:11:25.377310  392787 start.go:319] duration metric: took 24.707595419s to joinCluster
	I0916 18:11:25.377403  392787 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:11:25.377705  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:11:25.379208  392787 out.go:177] * Verifying Kubernetes components...
	I0916 18:11:25.380483  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:11:25.648629  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:11:25.671202  392787 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:11:25.671590  392787 kapi.go:59] client config for ha-365438: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt", KeyFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key", CAFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 18:11:25.671700  392787 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0916 18:11:25.672027  392787 node_ready.go:35] waiting up to 6m0s for node "ha-365438-m02" to be "Ready" ...
	I0916 18:11:25.672155  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:25.672168  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:25.672179  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:25.672185  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:25.685584  392787 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0916 18:11:26.172726  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:26.172751  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:26.172759  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:26.172763  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:26.176508  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:26.672501  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:26.672533  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:26.672543  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:26.672548  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:26.675715  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:27.173049  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:27.173081  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:27.173094  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:27.173100  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:27.178406  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:27.672326  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:27.672355  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:27.672367  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:27.672372  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:27.676185  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:27.676767  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:28.172971  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:28.172997  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:28.173006  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:28.173011  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:28.178047  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:28.673276  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:28.673300  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:28.673309  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:28.673313  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:28.677214  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:29.173079  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:29.173103  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:29.173111  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:29.173116  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:29.176619  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:29.672762  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:29.672789  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:29.672808  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:29.672814  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:29.676012  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:29.676888  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:30.173233  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:30.173259  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:30.173270  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:30.173277  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:30.176469  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:30.672536  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:30.672559  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:30.672567  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:30.672572  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:30.677943  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:31.172672  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:31.172700  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:31.172712  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:31.172719  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:31.177147  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:31.673251  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:31.673276  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:31.673285  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:31.673291  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:31.676778  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:31.677783  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:32.173152  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:32.173184  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:32.173197  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:32.173204  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:32.176580  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:32.672803  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:32.672828  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:32.672836  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:32.672841  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:32.736554  392787 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0916 18:11:33.173100  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:33.173123  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:33.173130  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:33.173135  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:33.176875  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:33.672479  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:33.672501  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:33.672510  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:33.672514  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:33.676089  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:34.173241  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:34.173273  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:34.173285  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:34.173291  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:34.176811  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:34.177466  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:34.672682  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:34.672706  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:34.672714  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:34.672718  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:34.676308  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:35.172941  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:35.172964  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:35.172973  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:35.172977  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:35.176362  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:35.672238  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:35.672264  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:35.672273  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:35.672277  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:35.676061  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:36.172969  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:36.173006  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:36.173015  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:36.173020  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:36.177121  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:36.177718  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:36.673112  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:36.673138  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:36.673147  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:36.673150  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:36.676423  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:37.172552  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:37.172578  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:37.172587  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:37.172591  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:37.176604  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:37.672936  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:37.672959  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:37.672970  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:37.672978  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:37.677363  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:38.172576  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:38.172601  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:38.172609  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:38.172615  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:38.176529  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:38.673253  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:38.673278  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:38.673289  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:38.673293  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:38.676581  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:38.677188  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:39.172551  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:39.172579  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:39.172588  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:39.172592  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:39.175634  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:39.672620  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:39.672644  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:39.672653  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:39.672657  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:39.676111  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:40.173176  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:40.173205  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:40.173216  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:40.173222  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:40.176742  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:40.672973  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:40.672998  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:40.673008  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:40.673014  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:40.676608  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:40.677266  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:41.173281  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:41.173307  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:41.173319  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:41.173323  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:41.177471  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:41.672288  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:41.672311  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:41.672320  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:41.672325  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:41.675832  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:42.172362  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:42.172390  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:42.172399  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:42.172403  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:42.176800  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:42.672515  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:42.672537  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:42.672546  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:42.672550  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:42.675794  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:43.172879  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:43.172905  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:43.172928  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:43.172935  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:43.176475  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:43.177117  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:43.672959  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:43.672983  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:43.672991  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:43.672995  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:43.676640  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.172513  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.172536  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.172545  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.172549  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.176127  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.176873  392787 node_ready.go:49] node "ha-365438-m02" has status "Ready":"True"
	I0916 18:11:44.176898  392787 node_ready.go:38] duration metric: took 18.504846955s for node "ha-365438-m02" to be "Ready" ...
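	The repeated GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02 requests above are node_ready.go polling the node's Ready condition through the API server until it flips to "True". A minimal client-go sketch of the same readiness poll follows; it is an illustration only, not minikube's node_ready implementation, and reuses the kubeconfig path and 6m0s budget shown in the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the "Config loaded from file" line above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19649-371203/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the node until its Ready condition is True, or give up after 6 minutes.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-365438-m02", metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node ha-365438-m02 is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node ha-365438-m02 to become Ready")
	}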
	I0916 18:11:44.176924  392787 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 18:11:44.177046  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:44.177058  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.177068  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.177075  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.181938  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:44.188581  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.188703  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9svk8
	I0916 18:11:44.188715  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.188726  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.188731  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.192571  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.193418  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.193435  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.193442  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.193448  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.196227  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.196963  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.196985  392787 pod_ready.go:82] duration metric: took 8.375088ms for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.196995  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.197070  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zh7sm
	I0916 18:11:44.197079  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.197086  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.197091  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.200092  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.201125  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.201142  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.201152  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.201157  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.203717  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.204184  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.204204  392787 pod_ready.go:82] duration metric: took 7.203495ms for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.204216  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.204349  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438
	I0916 18:11:44.204360  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.204367  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.204374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.207253  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.208118  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.208144  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.208152  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.208158  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.212944  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:44.213817  392787 pod_ready.go:93] pod "etcd-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.213842  392787 pod_ready.go:82] duration metric: took 9.614804ms for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.213855  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.213941  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m02
	I0916 18:11:44.213952  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.213961  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.213969  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.216855  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.217554  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.217569  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.217582  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.217587  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.219890  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.220524  392787 pod_ready.go:93] pod "etcd-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.220547  392787 pod_ready.go:82] duration metric: took 6.680434ms for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.220566  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.373021  392787 request.go:632] Waited for 152.359224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:11:44.373104  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:11:44.373110  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.373121  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.373130  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.376513  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.573537  392787 request.go:632] Waited for 196.392944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.573621  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.573632  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.573643  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.573651  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.576401  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.576942  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.576963  392787 pod_ready.go:82] duration metric: took 356.389594ms for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.576973  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.773145  392787 request.go:632] Waited for 196.07609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:11:44.773235  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:11:44.773242  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.773252  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.773257  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.776702  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.972986  392787 request.go:632] Waited for 195.41926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.973068  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.973073  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.973081  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.973087  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.976276  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.977082  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.977104  392787 pod_ready.go:82] duration metric: took 400.123141ms for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.977116  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.173208  392787 request.go:632] Waited for 195.990306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:11:45.173296  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:11:45.173304  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.173315  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.173326  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.177405  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:45.373413  392787 request.go:632] Waited for 195.387676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:45.373475  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:45.373480  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.373486  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.373492  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.377394  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:45.378061  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:45.378084  392787 pod_ready.go:82] duration metric: took 400.960417ms for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.378094  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.573146  392787 request.go:632] Waited for 194.944123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:11:45.573222  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:11:45.573230  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.573242  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.573253  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.576584  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:45.773583  392787 request.go:632] Waited for 196.311224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:45.773653  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:45.773660  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.773668  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.773682  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.776606  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:45.777194  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:45.777216  392787 pod_ready.go:82] duration metric: took 399.114761ms for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.777229  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.973560  392787 request.go:632] Waited for 196.245182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:11:45.973661  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:11:45.973673  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.973684  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.973693  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.976599  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:46.172598  392787 request.go:632] Waited for 195.271477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.172688  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.172695  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.172706  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.172712  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.176099  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.176619  392787 pod_ready.go:93] pod "kube-proxy-4rfbj" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:46.176641  392787 pod_ready.go:82] duration metric: took 399.404319ms for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.176654  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.372626  392787 request.go:632] Waited for 195.863267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:11:46.372710  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:11:46.372717  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.372729  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.372740  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.376508  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.573484  392787 request.go:632] Waited for 196.34687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:46.573568  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:46.573573  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.573580  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.573588  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.577714  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:46.578414  392787 pod_ready.go:93] pod "kube-proxy-nrqvf" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:46.578435  392787 pod_ready.go:82] duration metric: took 401.773565ms for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.578444  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.772584  392787 request.go:632] Waited for 194.03345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:11:46.772658  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:11:46.772666  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.772678  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.772687  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.775938  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.973046  392787 request.go:632] Waited for 196.365949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.973110  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.973115  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.973123  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.973127  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.976724  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.977346  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:46.977372  392787 pod_ready.go:82] duration metric: took 398.918632ms for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.977388  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:47.173516  392787 request.go:632] Waited for 196.023516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:11:47.173584  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:11:47.173593  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.173603  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.173611  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.177050  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:47.373257  392787 request.go:632] Waited for 195.422038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:47.373411  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:47.373423  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.373434  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.373444  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.377220  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:47.377734  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:47.377757  392787 pod_ready.go:82] duration metric: took 400.356993ms for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:47.377771  392787 pod_ready.go:39] duration metric: took 3.2008242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
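The readiness gate traced above repeats one pattern per component: GET the pod, GET its node, and report Ready once the pod's Ready condition is True. A minimal client-go sketch of that polling loop, not the minikube implementation itself; the kubeconfig path is a placeholder and the pod name is just one of the pods named in the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports the Ready condition,
// mirroring the pod_ready waits in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Placeholder kubeconfig path; the real path comes from the minikube profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-ha-365438", 6*time.Minute))
}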
	I0916 18:11:47.377792  392787 api_server.go:52] waiting for apiserver process to appear ...
	I0916 18:11:47.377906  392787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:11:47.394796  392787 api_server.go:72] duration metric: took 22.017327201s to wait for apiserver process to appear ...
	I0916 18:11:47.394830  392787 api_server.go:88] waiting for apiserver healthz status ...
	I0916 18:11:47.394858  392787 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0916 18:11:47.400272  392787 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0916 18:11:47.400351  392787 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0916 18:11:47.400359  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.400368  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.400374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.401426  392787 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 18:11:47.401533  392787 api_server.go:141] control plane version: v1.31.1
	I0916 18:11:47.401550  392787 api_server.go:131] duration metric: took 6.712256ms to wait for apiserver health ...
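After the pods are Ready, the log probes /healthz and then reads /version to report the control-plane version. A rough equivalent using client-go's discovery client, reusing the clientset and imports from the sketch above; this is an illustration, not minikube's code:

// checkAPIServer hits /healthz and then reads the server version,
// matching the healthz/version sequence logged above.
func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	if string(body) != "ok" {
		return fmt.Errorf("healthz returned %q", body)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.31.1
	return nil
}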
	I0916 18:11:47.401559  392787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 18:11:47.572998  392787 request.go:632] Waited for 171.354317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.573097  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.573106  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.573119  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.573128  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.585382  392787 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0916 18:11:47.591130  392787 system_pods.go:59] 17 kube-system pods found
	I0916 18:11:47.591174  392787 system_pods.go:61] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:11:47.591182  392787 system_pods.go:61] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:11:47.591186  392787 system_pods.go:61] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:11:47.591189  392787 system_pods.go:61] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:11:47.591193  392787 system_pods.go:61] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:11:47.591196  392787 system_pods.go:61] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:11:47.591205  392787 system_pods.go:61] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:11:47.591213  392787 system_pods.go:61] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:11:47.591216  392787 system_pods.go:61] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:11:47.591219  392787 system_pods.go:61] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:11:47.591222  392787 system_pods.go:61] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:11:47.591226  392787 system_pods.go:61] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:11:47.591229  392787 system_pods.go:61] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:11:47.591232  392787 system_pods.go:61] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:11:47.591235  392787 system_pods.go:61] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:11:47.591238  392787 system_pods.go:61] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:11:47.591241  392787 system_pods.go:61] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:11:47.591247  392787 system_pods.go:74] duration metric: took 189.679883ms to wait for pod list to return data ...
	I0916 18:11:47.591257  392787 default_sa.go:34] waiting for default service account to be created ...
	I0916 18:11:47.772686  392787 request.go:632] Waited for 181.316746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:11:47.772751  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:11:47.772756  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.772764  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.772769  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.776463  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:47.776725  392787 default_sa.go:45] found service account: "default"
	I0916 18:11:47.776744  392787 default_sa.go:55] duration metric: took 185.478694ms for default service account to be created ...
	I0916 18:11:47.776752  392787 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 18:11:47.972968  392787 request.go:632] Waited for 196.10847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.973076  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.973087  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.973098  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.973109  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.978423  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:47.983687  392787 system_pods.go:86] 17 kube-system pods found
	I0916 18:11:47.983722  392787 system_pods.go:89] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:11:47.983728  392787 system_pods.go:89] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:11:47.983737  392787 system_pods.go:89] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:11:47.983742  392787 system_pods.go:89] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:11:47.983747  392787 system_pods.go:89] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:11:47.983750  392787 system_pods.go:89] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:11:47.983753  392787 system_pods.go:89] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:11:47.983757  392787 system_pods.go:89] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:11:47.983760  392787 system_pods.go:89] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:11:47.983764  392787 system_pods.go:89] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:11:47.983767  392787 system_pods.go:89] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:11:47.983770  392787 system_pods.go:89] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:11:47.983773  392787 system_pods.go:89] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:11:47.983777  392787 system_pods.go:89] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:11:47.983782  392787 system_pods.go:89] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:11:47.983785  392787 system_pods.go:89] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:11:47.983788  392787 system_pods.go:89] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:11:47.983795  392787 system_pods.go:126] duration metric: took 207.036892ms to wait for k8s-apps to be running ...
	I0916 18:11:47.983805  392787 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 18:11:47.983851  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:11:47.999009  392787 system_svc.go:56] duration metric: took 15.186653ms WaitForService to wait for kubelet
	I0916 18:11:47.999054  392787 kubeadm.go:582] duration metric: took 22.621593946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:11:47.999081  392787 node_conditions.go:102] verifying NodePressure condition ...
	I0916 18:11:48.173654  392787 request.go:632] Waited for 174.446242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0916 18:11:48.173725  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0916 18:11:48.173733  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:48.173745  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:48.173752  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:48.178018  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:48.178795  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:11:48.178824  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:11:48.178840  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:11:48.178845  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:11:48.178851  392787 node_conditions.go:105] duration metric: took 179.764557ms to run NodePressure ...
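The NodePressure step above lists the nodes and records each node's ephemeral-storage and CPU capacity. A short sketch of reading those two fields with client-go, again reusing the clientset and imports from the earlier sketch:

// printNodeCapacity lists nodes and prints the two capacity figures
// reported in the node_conditions lines above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}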
	I0916 18:11:48.178866  392787 start.go:241] waiting for startup goroutines ...
	I0916 18:11:48.178904  392787 start.go:255] writing updated cluster config ...
	I0916 18:11:48.181519  392787 out.go:201] 
	I0916 18:11:48.183337  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:11:48.183448  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:11:48.185191  392787 out.go:177] * Starting "ha-365438-m03" control-plane node in "ha-365438" cluster
	I0916 18:11:48.186550  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:11:48.186588  392787 cache.go:56] Caching tarball of preloaded images
	I0916 18:11:48.186760  392787 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:11:48.186776  392787 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:11:48.186919  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:11:48.187167  392787 start.go:360] acquireMachinesLock for ha-365438-m03: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:11:48.187230  392787 start.go:364] duration metric: took 33.205µs to acquireMachinesLock for "ha-365438-m03"
	I0916 18:11:48.187269  392787 start.go:93] Provisioning new machine with config: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:11:48.187461  392787 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0916 18:11:48.189846  392787 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 18:11:48.189969  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:11:48.190012  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:11:48.205644  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0916 18:11:48.206157  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:11:48.206787  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:11:48.206812  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:11:48.207140  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:11:48.207336  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:11:48.207480  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:11:48.207616  392787 start.go:159] libmachine.API.Create for "ha-365438" (driver="kvm2")
	I0916 18:11:48.207643  392787 client.go:168] LocalClient.Create starting
	I0916 18:11:48.207672  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 18:11:48.207708  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:11:48.207722  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:11:48.207796  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 18:11:48.207815  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:11:48.207826  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:11:48.207842  392787 main.go:141] libmachine: Running pre-create checks...
	I0916 18:11:48.207850  392787 main.go:141] libmachine: (ha-365438-m03) Calling .PreCreateCheck
	I0916 18:11:48.207998  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetConfigRaw
	I0916 18:11:48.208444  392787 main.go:141] libmachine: Creating machine...
	I0916 18:11:48.208458  392787 main.go:141] libmachine: (ha-365438-m03) Calling .Create
	I0916 18:11:48.208610  392787 main.go:141] libmachine: (ha-365438-m03) Creating KVM machine...
	I0916 18:11:48.209971  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found existing default KVM network
	I0916 18:11:48.210053  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found existing private KVM network mk-ha-365438
	I0916 18:11:48.210156  392787 main.go:141] libmachine: (ha-365438-m03) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03 ...
	I0916 18:11:48.210193  392787 main.go:141] libmachine: (ha-365438-m03) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 18:11:48.210295  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.210172  393559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:11:48.210435  392787 main.go:141] libmachine: (ha-365438-m03) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 18:11:48.483007  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.482852  393559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa...
	I0916 18:11:48.658840  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.658716  393559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/ha-365438-m03.rawdisk...
	I0916 18:11:48.658867  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Writing magic tar header
	I0916 18:11:48.658878  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Writing SSH key tar header
	I0916 18:11:48.658889  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.658828  393559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03 ...
	I0916 18:11:48.658968  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03
	I0916 18:11:48.659000  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03 (perms=drwx------)
	I0916 18:11:48.659011  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 18:11:48.659026  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:11:48.659038  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 18:11:48.659048  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 18:11:48.659077  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 18:11:48.659089  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 18:11:48.659103  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 18:11:48.659116  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 18:11:48.659123  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 18:11:48.659131  392787 main.go:141] libmachine: (ha-365438-m03) Creating domain...
	I0916 18:11:48.659140  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 18:11:48.659150  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home
	I0916 18:11:48.659162  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Skipping /home - not owner
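A few lines back the driver creates an SSH key pair (id_rsa) for the new machine before building its disk image. A standalone sketch of generating such a key pair with crypto/rsa and golang.org/x/crypto/ssh; the paths are placeholders, not the machine store path from the log:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

// generateSSHKeyPair writes a PEM private key and an authorized_keys-style
// public key, similar in spirit to the id_rsa created for the new machine.
func generateSSHKeyPair(privPath, pubPath string) error {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile(privPath, privPEM, 0600); err != nil {
		return err
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(pubPath, ssh.MarshalAuthorizedKey(pub), 0644)
}

func main() {
	if err := generateSSHKeyPair("id_rsa", "id_rsa.pub"); err != nil {
		panic(err)
	}
}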
	I0916 18:11:48.659979  392787 main.go:141] libmachine: (ha-365438-m03) define libvirt domain using xml: 
	I0916 18:11:48.660009  392787 main.go:141] libmachine: (ha-365438-m03) <domain type='kvm'>
	I0916 18:11:48.660019  392787 main.go:141] libmachine: (ha-365438-m03)   <name>ha-365438-m03</name>
	I0916 18:11:48.660028  392787 main.go:141] libmachine: (ha-365438-m03)   <memory unit='MiB'>2200</memory>
	I0916 18:11:48.660036  392787 main.go:141] libmachine: (ha-365438-m03)   <vcpu>2</vcpu>
	I0916 18:11:48.660045  392787 main.go:141] libmachine: (ha-365438-m03)   <features>
	I0916 18:11:48.660056  392787 main.go:141] libmachine: (ha-365438-m03)     <acpi/>
	I0916 18:11:48.660065  392787 main.go:141] libmachine: (ha-365438-m03)     <apic/>
	I0916 18:11:48.660076  392787 main.go:141] libmachine: (ha-365438-m03)     <pae/>
	I0916 18:11:48.660084  392787 main.go:141] libmachine: (ha-365438-m03)     
	I0916 18:11:48.660120  392787 main.go:141] libmachine: (ha-365438-m03)   </features>
	I0916 18:11:48.660143  392787 main.go:141] libmachine: (ha-365438-m03)   <cpu mode='host-passthrough'>
	I0916 18:11:48.660155  392787 main.go:141] libmachine: (ha-365438-m03)   
	I0916 18:11:48.660164  392787 main.go:141] libmachine: (ha-365438-m03)   </cpu>
	I0916 18:11:48.660175  392787 main.go:141] libmachine: (ha-365438-m03)   <os>
	I0916 18:11:48.660190  392787 main.go:141] libmachine: (ha-365438-m03)     <type>hvm</type>
	I0916 18:11:48.660201  392787 main.go:141] libmachine: (ha-365438-m03)     <boot dev='cdrom'/>
	I0916 18:11:48.660209  392787 main.go:141] libmachine: (ha-365438-m03)     <boot dev='hd'/>
	I0916 18:11:48.660220  392787 main.go:141] libmachine: (ha-365438-m03)     <bootmenu enable='no'/>
	I0916 18:11:48.660229  392787 main.go:141] libmachine: (ha-365438-m03)   </os>
	I0916 18:11:48.660239  392787 main.go:141] libmachine: (ha-365438-m03)   <devices>
	I0916 18:11:48.660246  392787 main.go:141] libmachine: (ha-365438-m03)     <disk type='file' device='cdrom'>
	I0916 18:11:48.660261  392787 main.go:141] libmachine: (ha-365438-m03)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/boot2docker.iso'/>
	I0916 18:11:48.660272  392787 main.go:141] libmachine: (ha-365438-m03)       <target dev='hdc' bus='scsi'/>
	I0916 18:11:48.660283  392787 main.go:141] libmachine: (ha-365438-m03)       <readonly/>
	I0916 18:11:48.660296  392787 main.go:141] libmachine: (ha-365438-m03)     </disk>
	I0916 18:11:48.660308  392787 main.go:141] libmachine: (ha-365438-m03)     <disk type='file' device='disk'>
	I0916 18:11:48.660333  392787 main.go:141] libmachine: (ha-365438-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 18:11:48.660349  392787 main.go:141] libmachine: (ha-365438-m03)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/ha-365438-m03.rawdisk'/>
	I0916 18:11:48.660359  392787 main.go:141] libmachine: (ha-365438-m03)       <target dev='hda' bus='virtio'/>
	I0916 18:11:48.660368  392787 main.go:141] libmachine: (ha-365438-m03)     </disk>
	I0916 18:11:48.660379  392787 main.go:141] libmachine: (ha-365438-m03)     <interface type='network'>
	I0916 18:11:48.660396  392787 main.go:141] libmachine: (ha-365438-m03)       <source network='mk-ha-365438'/>
	I0916 18:11:48.660407  392787 main.go:141] libmachine: (ha-365438-m03)       <model type='virtio'/>
	I0916 18:11:48.660417  392787 main.go:141] libmachine: (ha-365438-m03)     </interface>
	I0916 18:11:48.660444  392787 main.go:141] libmachine: (ha-365438-m03)     <interface type='network'>
	I0916 18:11:48.660454  392787 main.go:141] libmachine: (ha-365438-m03)       <source network='default'/>
	I0916 18:11:48.660463  392787 main.go:141] libmachine: (ha-365438-m03)       <model type='virtio'/>
	I0916 18:11:48.660472  392787 main.go:141] libmachine: (ha-365438-m03)     </interface>
	I0916 18:11:48.660482  392787 main.go:141] libmachine: (ha-365438-m03)     <serial type='pty'>
	I0916 18:11:48.660491  392787 main.go:141] libmachine: (ha-365438-m03)       <target port='0'/>
	I0916 18:11:48.660502  392787 main.go:141] libmachine: (ha-365438-m03)     </serial>
	I0916 18:11:48.660512  392787 main.go:141] libmachine: (ha-365438-m03)     <console type='pty'>
	I0916 18:11:48.660523  392787 main.go:141] libmachine: (ha-365438-m03)       <target type='serial' port='0'/>
	I0916 18:11:48.660532  392787 main.go:141] libmachine: (ha-365438-m03)     </console>
	I0916 18:11:48.660549  392787 main.go:141] libmachine: (ha-365438-m03)     <rng model='virtio'>
	I0916 18:11:48.660567  392787 main.go:141] libmachine: (ha-365438-m03)       <backend model='random'>/dev/random</backend>
	I0916 18:11:48.660579  392787 main.go:141] libmachine: (ha-365438-m03)     </rng>
	I0916 18:11:48.660595  392787 main.go:141] libmachine: (ha-365438-m03)     
	I0916 18:11:48.660609  392787 main.go:141] libmachine: (ha-365438-m03)     
	I0916 18:11:48.660618  392787 main.go:141] libmachine: (ha-365438-m03)   </devices>
	I0916 18:11:48.660628  392787 main.go:141] libmachine: (ha-365438-m03) </domain>
	I0916 18:11:48.660640  392787 main.go:141] libmachine: (ha-365438-m03) 
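The block above is the libvirt domain XML the kvm2 driver logs before defining the VM. A minimal sketch of defining and starting a domain from XML like this with the libvirt Go bindings (libvirt.org/go/libvirt); the import path, file handling, and error handling are illustrative and not the driver's actual code:

package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// domainXML would hold an XML document like the one logged above;
	// the driver builds it in memory rather than reading a file.
	domainXML, err := os.ReadFile("ha-365438-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(domainXML)) // register the domain with libvirt
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the VM, i.e. the "Creating domain..." step
		panic(err)
	}
}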
	I0916 18:11:48.667531  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:50:76:20 in network default
	I0916 18:11:48.668111  392787 main.go:141] libmachine: (ha-365438-m03) Ensuring networks are active...
	I0916 18:11:48.668134  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:48.668790  392787 main.go:141] libmachine: (ha-365438-m03) Ensuring network default is active
	I0916 18:11:48.669178  392787 main.go:141] libmachine: (ha-365438-m03) Ensuring network mk-ha-365438 is active
	I0916 18:11:48.669602  392787 main.go:141] libmachine: (ha-365438-m03) Getting domain xml...
	I0916 18:11:48.670284  392787 main.go:141] libmachine: (ha-365438-m03) Creating domain...
	I0916 18:11:49.916314  392787 main.go:141] libmachine: (ha-365438-m03) Waiting to get IP...
	I0916 18:11:49.917055  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:49.917486  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:49.917525  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:49.917469  393559 retry.go:31] will retry after 198.51809ms: waiting for machine to come up
	I0916 18:11:50.117986  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:50.118535  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:50.118560  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:50.118479  393559 retry.go:31] will retry after 368.043611ms: waiting for machine to come up
	I0916 18:11:50.488070  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:50.488581  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:50.488610  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:50.488537  393559 retry.go:31] will retry after 388.359286ms: waiting for machine to come up
	I0916 18:11:50.877948  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:50.878401  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:50.878490  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:50.878376  393559 retry.go:31] will retry after 367.062779ms: waiting for machine to come up
	I0916 18:11:51.246933  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:51.247515  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:51.247548  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:51.247463  393559 retry.go:31] will retry after 517.788094ms: waiting for machine to come up
	I0916 18:11:51.767063  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:51.767627  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:51.767650  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:51.767582  393559 retry.go:31] will retry after 836.830273ms: waiting for machine to come up
	I0916 18:11:52.606349  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:52.606737  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:52.606766  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:52.606704  393559 retry.go:31] will retry after 884.544993ms: waiting for machine to come up
	I0916 18:11:53.493201  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:53.493736  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:53.493762  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:53.493701  393559 retry.go:31] will retry after 1.007434851s: waiting for machine to come up
	I0916 18:11:54.503181  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:54.503551  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:54.503600  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:54.503511  393559 retry.go:31] will retry after 1.759545297s: waiting for machine to come up
	I0916 18:11:56.264502  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:56.264997  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:56.265029  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:56.264905  393559 retry.go:31] will retry after 2.178225549s: waiting for machine to come up
	I0916 18:11:58.444424  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:58.444913  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:58.444952  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:58.444850  393559 retry.go:31] will retry after 2.536690522s: waiting for machine to come up
	I0916 18:12:00.982928  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:00.983341  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:12:00.983364  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:12:00.983305  393559 retry.go:31] will retry after 2.6089067s: waiting for machine to come up
	I0916 18:12:03.593830  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:03.594390  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:12:03.594413  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:12:03.594324  393559 retry.go:31] will retry after 4.326497593s: waiting for machine to come up
	I0916 18:12:07.925823  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:07.926196  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:12:07.926220  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:12:07.926153  393559 retry.go:31] will retry after 4.753851469s: waiting for machine to come up
	I0916 18:12:12.684646  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.685156  392787 main.go:141] libmachine: (ha-365438-m03) Found IP for machine: 192.168.39.231
	I0916 18:12:12.685182  392787 main.go:141] libmachine: (ha-365438-m03) Reserving static IP address...
	I0916 18:12:12.685195  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has current primary IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.685590  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find host DHCP lease matching {name: "ha-365438-m03", mac: "52:54:00:ac:e5:94", ip: "192.168.39.231"} in network mk-ha-365438
	I0916 18:12:12.761275  392787 main.go:141] libmachine: (ha-365438-m03) Reserved static IP address: 192.168.39.231
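The "will retry after ..." lines above are the wait-for-IP loop: each attempt looks up the domain's DHCP lease and sleeps a growing, jittered interval before trying again. A generic sketch of that retry shape; the lookup function is a stand-in, not the driver's lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls lookup until it succeeds or the timeout passes,
// sleeping a growing, jittered delay between attempts, which is the shape
// of the retry lines logged above.
func retryWithBackoff(timeout time.Duration, lookup func() (string, bool)) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2)))
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Stand-in lookup that "finds" an IP after a few attempts.
	attempts := 0
	ip, err := retryWithBackoff(30*time.Second, func() (string, bool) {
		attempts++
		return "192.168.39.231", attempts > 3
	})
	fmt.Println(ip, err)
}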
	I0916 18:12:12.761310  392787 main.go:141] libmachine: (ha-365438-m03) Waiting for SSH to be available...
	I0916 18:12:12.761319  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Getting to WaitForSSH function...
	I0916 18:12:12.764567  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.765135  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:12.765161  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.765395  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Using SSH client type: external
	I0916 18:12:12.765421  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa (-rw-------)
	I0916 18:12:12.765449  392787 main.go:141] libmachine: (ha-365438-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 18:12:12.765467  392787 main.go:141] libmachine: (ha-365438-m03) DBG | About to run SSH command:
	I0916 18:12:12.765483  392787 main.go:141] libmachine: (ha-365438-m03) DBG | exit 0
	I0916 18:12:12.893201  392787 main.go:141] libmachine: (ha-365438-m03) DBG | SSH cmd err, output: <nil>: 
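The WaitForSSH step above shells out to the system ssh client with a fixed option set and runs "exit 0" until the command succeeds. A simplified sketch of that external invocation with os/exec; the key path is a placeholder and the option list is trimmed, not copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero runs "exit 0" on the target over the system ssh binary,
// roughly what the external WaitForSSH check above does.
func sshExitZero(user, host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	fmt.Println(sshExitZero("docker", "192.168.39.231", "/path/to/id_rsa"))
}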
	I0916 18:12:12.893458  392787 main.go:141] libmachine: (ha-365438-m03) KVM machine creation complete!
	I0916 18:12:12.893817  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetConfigRaw
	I0916 18:12:12.894411  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:12.894635  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:12.894798  392787 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 18:12:12.894816  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:12:12.896330  392787 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 18:12:12.896345  392787 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 18:12:12.896352  392787 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 18:12:12.896360  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:12.898798  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.899139  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:12.899167  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.899350  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:12.899563  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:12.899722  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:12.899864  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:12.900011  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:12.900269  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:12.900281  392787 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 18:12:13.008569  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:12:13.008593  392787 main.go:141] libmachine: Detecting the provisioner...
	I0916 18:12:13.008601  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.011614  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.012064  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.012095  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.012238  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.012487  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.012691  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.012823  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.012999  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.013182  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.013194  392787 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 18:12:13.122122  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 18:12:13.122217  392787 main.go:141] libmachine: found compatible host: buildroot
	I0916 18:12:13.122231  392787 main.go:141] libmachine: Provisioning with buildroot...
	I0916 18:12:13.122246  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:12:13.122508  392787 buildroot.go:166] provisioning hostname "ha-365438-m03"
	I0916 18:12:13.122543  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:12:13.122756  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.125571  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.126197  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.126227  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.126608  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.126864  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.127078  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.127268  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.127497  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.127714  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.127727  392787 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438-m03 && echo "ha-365438-m03" | sudo tee /etc/hostname
	I0916 18:12:13.252848  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438-m03
	
	I0916 18:12:13.252889  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.255720  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.256099  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.256131  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.256322  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.256701  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.256885  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.257073  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.257255  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.257425  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.257442  392787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:12:13.375127  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:12:13.375159  392787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:12:13.375183  392787 buildroot.go:174] setting up certificates
	I0916 18:12:13.375195  392787 provision.go:84] configureAuth start
	I0916 18:12:13.375208  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:12:13.375530  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:13.378260  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.378510  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.378532  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.378673  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.380726  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.381127  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.381157  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.381308  392787 provision.go:143] copyHostCerts
	I0916 18:12:13.381338  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:12:13.381371  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:12:13.381380  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:12:13.381447  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:12:13.381524  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:12:13.381541  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:12:13.381547  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:12:13.381575  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:12:13.381636  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:12:13.381666  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:12:13.381679  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:12:13.381713  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:12:13.381772  392787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438-m03 san=[127.0.0.1 192.168.39.231 ha-365438-m03 localhost minikube]
	I0916 18:12:13.515688  392787 provision.go:177] copyRemoteCerts
	I0916 18:12:13.515749  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:12:13.515777  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.518663  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.518955  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.518976  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.519173  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.519363  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.519503  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.519682  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:13.603320  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:12:13.603411  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 18:12:13.629247  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:12:13.629317  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 18:12:13.654026  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:12:13.654116  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:12:13.680776  392787 provision.go:87] duration metric: took 305.564483ms to configureAuth
	I0916 18:12:13.680813  392787 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:12:13.681128  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:12:13.681236  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.684310  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.684738  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.684769  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.684966  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.685174  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.685337  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.685488  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.685647  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.685859  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.685885  392787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:12:13.926138  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:12:13.926174  392787 main.go:141] libmachine: Checking connection to Docker...
	I0916 18:12:13.926185  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetURL
	I0916 18:12:13.927640  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Using libvirt version 6000000
	I0916 18:12:13.929849  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.930175  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.930198  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.930397  392787 main.go:141] libmachine: Docker is up and running!
	I0916 18:12:13.930418  392787 main.go:141] libmachine: Reticulating splines...
	I0916 18:12:13.930426  392787 client.go:171] duration metric: took 25.722776003s to LocalClient.Create
	I0916 18:12:13.930449  392787 start.go:167] duration metric: took 25.722834457s to libmachine.API.Create "ha-365438"
	I0916 18:12:13.930458  392787 start.go:293] postStartSetup for "ha-365438-m03" (driver="kvm2")
	I0916 18:12:13.930468  392787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:12:13.930487  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:13.930720  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:12:13.930744  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.932830  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.933169  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.933192  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.933321  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.933491  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.933636  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.933751  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:14.021119  392787 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:12:14.025372  392787 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:12:14.025404  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:12:14.025472  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:12:14.025563  392787 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:12:14.025577  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:12:14.025704  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:12:14.037240  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:12:14.063223  392787 start.go:296] duration metric: took 132.749962ms for postStartSetup
	I0916 18:12:14.063293  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetConfigRaw
	I0916 18:12:14.064019  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:14.066928  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.067342  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.067371  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.067659  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:12:14.067883  392787 start.go:128] duration metric: took 25.880405444s to createHost
	I0916 18:12:14.067918  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:14.070357  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.070728  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.070757  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.070893  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:14.071079  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.071222  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.071322  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:14.071492  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:14.071677  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:14.071694  392787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:12:14.182399  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510334.158889156
	
	I0916 18:12:14.182427  392787 fix.go:216] guest clock: 1726510334.158889156
	I0916 18:12:14.182437  392787 fix.go:229] Guest: 2024-09-16 18:12:14.158889156 +0000 UTC Remote: 2024-09-16 18:12:14.067900348 +0000 UTC m=+148.242374056 (delta=90.988808ms)
	I0916 18:12:14.182460  392787 fix.go:200] guest clock delta is within tolerance: 90.988808ms
	I0916 18:12:14.182467  392787 start.go:83] releasing machines lock for "ha-365438-m03", held for 25.995224257s
	I0916 18:12:14.182489  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.182814  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:14.186304  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.186750  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.186783  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.189603  392787 out.go:177] * Found network options:
	I0916 18:12:14.191277  392787 out.go:177]   - NO_PROXY=192.168.39.165,192.168.39.18
	W0916 18:12:14.193262  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 18:12:14.193294  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:12:14.193318  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.194050  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.194279  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.194421  392787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:12:14.194468  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	W0916 18:12:14.194506  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 18:12:14.194531  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:12:14.194609  392787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:12:14.194635  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:14.197775  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.197801  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.198169  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.198199  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.198225  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.198245  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.198305  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:14.198455  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:14.198606  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.198636  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.198775  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:14.198783  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:14.198998  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:14.198997  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:14.448954  392787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:12:14.455918  392787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:12:14.456003  392787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:12:14.476545  392787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 18:12:14.476582  392787 start.go:495] detecting cgroup driver to use...
	I0916 18:12:14.476663  392787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:12:14.496278  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:12:14.512278  392787 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:12:14.512337  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:12:14.527627  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:12:14.542070  392787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:12:14.680011  392787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:12:14.828286  392787 docker.go:233] disabling docker service ...
	I0916 18:12:14.828379  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:12:14.844496  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:12:14.859761  392787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:12:14.993508  392787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:12:15.124977  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:12:15.140329  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:12:15.160341  392787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:12:15.160420  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.173484  392787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:12:15.173555  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.186345  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.200092  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.211657  392787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:12:15.223390  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.235199  392787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.254654  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.266113  392787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:12:15.276891  392787 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 18:12:15.277002  392787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 18:12:15.291279  392787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:12:15.301766  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:12:15.417275  392787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:12:15.521133  392787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:12:15.521217  392787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:12:15.526494  392787 start.go:563] Will wait 60s for crictl version
	I0916 18:12:15.526576  392787 ssh_runner.go:195] Run: which crictl
	I0916 18:12:15.530531  392787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:12:15.574054  392787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:12:15.574153  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:12:15.603221  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:12:15.634208  392787 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:12:15.635691  392787 out.go:177]   - env NO_PROXY=192.168.39.165
	I0916 18:12:15.637183  392787 out.go:177]   - env NO_PROXY=192.168.39.165,192.168.39.18
	I0916 18:12:15.638493  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:15.641228  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:15.641576  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:15.641606  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:15.641841  392787 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:12:15.646120  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:12:15.659858  392787 mustload.go:65] Loading cluster: ha-365438
	I0916 18:12:15.660161  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:12:15.660526  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:12:15.660592  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:12:15.676323  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0916 18:12:15.676844  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:12:15.677362  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:12:15.677397  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:12:15.677786  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:12:15.677968  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:12:15.679484  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:12:15.679783  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:12:15.679823  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:12:15.696055  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I0916 18:12:15.696528  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:12:15.697061  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:12:15.697081  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:12:15.697427  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:12:15.697663  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:12:15.697844  392787 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.231
	I0916 18:12:15.697856  392787 certs.go:194] generating shared ca certs ...
	I0916 18:12:15.697875  392787 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:12:15.698039  392787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:12:15.698100  392787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:12:15.698113  392787 certs.go:256] generating profile certs ...
	I0916 18:12:15.698220  392787 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:12:15.698250  392787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c
	I0916 18:12:15.698275  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.18 192.168.39.231 192.168.39.254]
	I0916 18:12:15.780429  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c ...
	I0916 18:12:15.780465  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c: {Name:mk92bfd88419c53d2051fea6e814cf12a8ab551b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:12:15.780648  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c ...
	I0916 18:12:15.780660  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c: {Name:mk93d7a277a030e4c0050a92c3af54e7af5dd6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:12:15.780749  392787 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:12:15.780891  392787 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:12:15.781064  392787 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:12:15.781082  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:12:15.781096  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:12:15.781109  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:12:15.781122  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:12:15.781137  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:12:15.781149  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:12:15.781161  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:12:15.801031  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:12:15.801129  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:12:15.801166  392787 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:12:15.801176  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:12:15.801199  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:12:15.801223  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:12:15.801245  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:12:15.801286  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:12:15.801315  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:15.801336  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:12:15.801351  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:12:15.801389  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:12:15.804809  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:15.805305  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:12:15.805349  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:15.805590  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:12:15.805809  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:12:15.805990  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:12:15.806169  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:12:15.885366  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 18:12:15.891763  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 18:12:15.904394  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 18:12:15.909199  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 18:12:15.921290  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 18:12:15.926248  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 18:12:15.937817  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 18:12:15.942446  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 18:12:15.954821  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 18:12:15.960948  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 18:12:15.972262  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 18:12:15.976972  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 18:12:15.989284  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:12:16.017611  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:12:16.044622  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:12:16.074799  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:12:16.101738  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 18:12:16.128149  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 18:12:16.156672  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:12:16.184029  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:12:16.211535  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:12:16.239282  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:12:16.265500  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:12:16.291138  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 18:12:16.310218  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 18:12:16.328559  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 18:12:16.345749  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 18:12:16.363416  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 18:12:16.381315  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 18:12:16.398951  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 18:12:16.417890  392787 ssh_runner.go:195] Run: openssl version
	I0916 18:12:16.423972  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:12:16.435476  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:16.440311  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:16.440406  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:16.446585  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:12:16.459570  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:12:16.471240  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:12:16.476073  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:12:16.476170  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:12:16.482297  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:12:16.495192  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:12:16.506688  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:12:16.511325  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:12:16.511392  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:12:16.517616  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 18:12:16.529378  392787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:12:16.533743  392787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 18:12:16.533806  392787 kubeadm.go:934] updating node {m03 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0916 18:12:16.533904  392787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 18:12:16.533930  392787 kube-vip.go:115] generating kube-vip config ...
	I0916 18:12:16.533973  392787 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:12:16.550457  392787 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:12:16.550538  392787 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 18:12:16.550597  392787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:12:16.561170  392787 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 18:12:16.561251  392787 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 18:12:16.571686  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 18:12:16.571687  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 18:12:16.571691  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 18:12:16.571743  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:12:16.571782  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:12:16.571804  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:12:16.571727  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:12:16.571885  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:12:16.590503  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:12:16.590537  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 18:12:16.590565  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 18:12:16.590504  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 18:12:16.590613  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:12:16.590612  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 18:12:16.617764  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 18:12:16.617812  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
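The `binary.go:74` lines above show the kubeadm/kubelet/kubectl downloads carrying a `?checksum=file:<url>.sha256` hint, after which the cached binaries are scp'd into `/var/lib/minikube/binaries/v1.31.1`. As a rough illustration of that kind of checksum-verified fetch (a hypothetical helper, not minikube's own downloader; URL and destination are placeholders taken from the log):

// fetchAndVerify downloads a file and checks it against the SHA-256
// published in the matching ".sha256" file, mirroring the
// "?checksum=file:<url>.sha256" pattern in the log above.
// Sketch only, under the assumptions stated in the text.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetchAndVerify(url, dest string) error {
	// Fetch the published checksum first (first whitespace-separated field).
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	// Download the binary, hashing it as it is written to disk.
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	if err := fetchAndVerify(
		"https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm",
		"kubeadm"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}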
	I0916 18:12:17.546617  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 18:12:17.556651  392787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 18:12:17.574637  392787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:12:17.594355  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 18:12:17.611343  392787 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:12:17.615500  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:12:17.628546  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:12:17.765111  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:12:17.785384  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:12:17.785722  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:12:17.785763  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:12:17.801417  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0916 18:12:17.801922  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:12:17.802503  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:12:17.802528  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:12:17.802875  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:12:17.803094  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:12:17.803262  392787 start.go:317] joinCluster: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:12:17.803394  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 18:12:17.803411  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:12:17.806440  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:17.806874  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:12:17.806904  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:17.807028  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:12:17.807213  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:12:17.807369  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:12:17.807511  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:12:17.973906  392787 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:12:17.973976  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7t40zy.1tbwssoyalawrr0f --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m03 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443"
	I0916 18:12:40.902025  392787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7t40zy.1tbwssoyalawrr0f --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m03 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443": (22.928017131s)
	I0916 18:12:40.902078  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 18:12:41.483205  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-365438-m03 minikube.k8s.io/updated_at=2024_09_16T18_12_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=ha-365438 minikube.k8s.io/primary=false
	I0916 18:12:41.627686  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-365438-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 18:12:41.741188  392787 start.go:319] duration metric: took 23.937923236s to joinCluster
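For reference, the join sequence logged above (mint a join command on the existing control plane, run `kubeadm join ... --control-plane` on the new machine, then label it and remove the control-plane taint) can be reproduced by hand. A rough, hypothetical Go sketch driving the same three steps over ssh; the host addresses, node name, and flags are copied from the log, but this is not how minikube itself wires the calls:

// joinControlPlane sketch: create a join command on the primary node,
// run it on the new node, then label and untaint the new control-plane
// node. Hosts and names below are taken from the log for illustration.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func runSSH(host, cmd string) (string, error) {
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	start := time.Now()

	// 1. Print a join command (with a non-expiring token) on the primary.
	join, err := runSSH("root@192.168.39.165",
		"kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(err)
	}

	// 2. Run it on the new node with the control-plane flags seen in the log.
	if _, err := runSSH("root@192.168.39.231",
		join+" --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443"); err != nil {
		panic(err)
	}

	// 3. Label the node and drop the NoSchedule control-plane taint.
	runSSH("root@192.168.39.165",
		"kubectl label --overwrite nodes ha-365438-m03 minikube.k8s.io/primary=false")
	runSSH("root@192.168.39.165",
		"kubectl taint nodes ha-365438-m03 node-role.kubernetes.io/control-plane:NoSchedule-")

	fmt.Printf("joinCluster took %s\n", time.Since(start))
}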
	I0916 18:12:41.741277  392787 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:12:41.741618  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:12:41.742659  392787 out.go:177] * Verifying Kubernetes components...
	I0916 18:12:41.744104  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:12:42.052755  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:12:42.081527  392787 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:12:42.081873  392787 kapi.go:59] client config for ha-365438: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt", KeyFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key", CAFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 18:12:42.081981  392787 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0916 18:12:42.082316  392787 node_ready.go:35] waiting up to 6m0s for node "ha-365438-m03" to be "Ready" ...
	I0916 18:12:42.082430  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:42.082445  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:42.082456  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:42.082461  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:42.085771  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:42.582782  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:42.582816  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:42.582828  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:42.582836  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:42.586265  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:43.082552  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:43.082576  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:43.082584  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:43.082588  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:43.086205  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:43.583150  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:43.583180  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:43.583192  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:43.583199  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:43.587018  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:44.083045  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:44.083067  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:44.083076  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:44.083080  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:44.087110  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:12:44.087820  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:44.583130  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:44.583156  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:44.583168  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:44.583174  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:44.586276  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:45.083344  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:45.083374  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:45.083386  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:45.083391  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:45.086404  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:45.583428  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:45.583454  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:45.583466  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:45.583471  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:45.586835  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:46.083067  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:46.083098  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:46.083109  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:46.083117  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:46.086876  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:46.583356  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:46.583383  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:46.583395  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:46.583408  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:46.586623  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:46.587362  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:47.082628  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:47.082655  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:47.082664  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:47.082667  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:47.086030  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:47.583300  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:47.583325  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:47.583339  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:47.583343  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:47.587136  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:48.083231  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:48.083253  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:48.083261  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:48.083266  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:48.086866  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:48.583216  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:48.583252  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:48.583274  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:48.583283  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:48.590890  392787 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 18:12:48.591473  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:49.082703  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:49.082727  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:49.082736  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:49.082741  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:49.086644  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:49.583567  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:49.583597  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:49.583606  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:49.583611  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:49.586911  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:50.083340  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:50.083362  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:50.083370  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:50.083374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:50.088634  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:12:50.583515  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:50.583540  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:50.583548  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:50.583552  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:50.587120  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:51.083268  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:51.083301  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:51.083311  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:51.083316  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:51.086864  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:51.087409  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:51.582749  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:51.582775  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:51.582786  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:51.582790  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:51.586554  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:52.083588  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:52.083617  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:52.083627  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:52.083632  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:52.087058  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:52.582622  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:52.582645  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:52.582659  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:52.582664  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:52.586165  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:53.083036  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:53.083059  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:53.083067  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:53.083072  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:53.086494  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:53.583553  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:53.583578  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:53.583589  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:53.583593  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:53.587308  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:53.587970  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:54.083329  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:54.083354  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:54.083364  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:54.083369  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:54.088254  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:12:54.583186  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:54.583210  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:54.583219  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:54.583223  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:54.586894  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:55.082776  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:55.082801  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:55.082810  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:55.082815  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:55.086315  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:55.583567  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:55.583591  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:55.583600  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:55.583609  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:55.587584  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:55.588203  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:56.082815  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:56.082839  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:56.082848  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:56.082853  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:56.086288  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:56.583253  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:56.583281  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:56.583293  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:56.583299  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:56.588417  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:12:57.083394  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:57.083418  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:57.083427  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:57.083432  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:57.086909  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:57.582896  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:57.582927  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:57.582939  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:57.582945  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:57.586090  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:58.082726  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:58.082755  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:58.082767  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:58.082774  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:58.086171  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:58.086896  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:58.583401  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:58.583431  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:58.583444  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:58.583454  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:58.587059  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.083306  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.083332  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.083339  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.083343  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.086672  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.582558  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.582582  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.582594  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.582597  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.585909  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.586744  392787 node_ready.go:49] node "ha-365438-m03" has status "Ready":"True"
	I0916 18:12:59.586772  392787 node_ready.go:38] duration metric: took 17.504427469s for node "ha-365438-m03" to be "Ready" ...
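The node_ready.go loop above is the standard "poll the node until its Ready condition reports True" pattern, issuing a GET roughly every 500ms against the first control plane. A compact client-go sketch of the same check (kubeconfig path is a placeholder; the node name is the one from the log):

// waitNodeReady polls a node until its NodeReady condition is True,
// roughly what the node_ready.go wait in the log is doing. Sketch only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-365438-m03", metav1.GetOptions{})
		if err == nil && isReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	fmt.Println("timed out waiting for node to be Ready")
}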
	I0916 18:12:59.586785  392787 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 18:12:59.586883  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:12:59.586894  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.586905  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.586910  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.595755  392787 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 18:12:59.603593  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.603688  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9svk8
	I0916 18:12:59.603697  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.603705  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.603709  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.606559  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.607268  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:12:59.607287  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.607298  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.607303  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.610335  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.610779  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.610795  392787 pod_ready.go:82] duration metric: took 7.175735ms for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.610806  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.610866  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zh7sm
	I0916 18:12:59.610876  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.610886  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.610892  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.613726  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.614410  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:12:59.614427  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.614437  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.614442  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.616779  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.617280  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.617301  392787 pod_ready.go:82] duration metric: took 6.486836ms for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.617312  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.617370  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438
	I0916 18:12:59.617381  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.617390  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.617399  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.619864  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.620558  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:12:59.620570  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.620577  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.620583  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.622783  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.623203  392787 pod_ready.go:93] pod "etcd-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.623224  392787 pod_ready.go:82] duration metric: took 5.904153ms for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.623245  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.623309  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m02
	I0916 18:12:59.623318  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.623324  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.623328  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.625871  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.626349  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:12:59.626363  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.626369  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.626374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.628395  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.628890  392787 pod_ready.go:93] pod "etcd-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.628908  392787 pod_ready.go:82] duration metric: took 5.653837ms for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.628927  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.783340  392787 request.go:632] Waited for 154.329904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m03
	I0916 18:12:59.783420  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m03
	I0916 18:12:59.783428  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.783467  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.783478  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.787297  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.983422  392787 request.go:632] Waited for 195.400533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.983530  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.983547  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.983559  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.983590  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.986759  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.987515  392787 pod_ready.go:93] pod "etcd-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.987534  392787 pod_ready.go:82] duration metric: took 358.598974ms for pod "etcd-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
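The `request.go:632] Waited for ... due to client-side throttling` messages that start appearing here come from client-go's own rate limiter, not from API-server priority and fairness. The client config dumped earlier shows `QPS:0, Burst:0`, which means the library defaults apply (5 requests/second with a burst of 10), so bursts of back-to-back pod and node GETs queue for ~200ms each. If the polling rate mattered, the limiter could be loosened on the rest.Config before building the clientset; a small sketch with illustrative values:

// Raising client-go's client-side rate limits on a rest.Config.
// Values are illustrative; the kubeconfig path is a placeholder.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default is 10 when left at 0
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}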
	I0916 18:12:59.987549  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.182889  392787 request.go:632] Waited for 195.23344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:13:00.182952  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:13:00.182957  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.182964  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.182968  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.186893  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:00.383215  392787 request.go:632] Waited for 195.432707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:00.383276  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:00.383281  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.383289  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.383292  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.386737  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:00.387448  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:00.387468  392787 pod_ready.go:82] duration metric: took 399.91301ms for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.387478  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.583590  392787 request.go:632] Waited for 196.029732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:13:00.583676  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:13:00.583683  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.583694  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.583704  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.587274  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:00.782762  392787 request.go:632] Waited for 194.162407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:00.782860  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:00.782871  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.782883  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.782891  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.792088  392787 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 18:13:00.792847  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:00.792883  392787 pod_ready.go:82] duration metric: took 405.39653ms for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.792896  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.983091  392787 request.go:632] Waited for 190.084396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m03
	I0916 18:13:00.983174  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m03
	I0916 18:13:00.983181  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.983189  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.983196  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.987131  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.183425  392787 request.go:632] Waited for 195.419999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:01.183487  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:01.183492  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.183499  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.183502  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.188515  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:01.189086  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:01.189113  392787 pod_ready.go:82] duration metric: took 396.209012ms for pod "kube-apiserver-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.189129  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.383062  392787 request.go:632] Waited for 193.84647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:13:01.383169  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:13:01.383179  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.383187  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.383191  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.386966  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.582997  392787 request.go:632] Waited for 195.374257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:01.583079  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:01.583088  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.583100  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.583109  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.587144  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:01.587995  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:01.588022  392787 pod_ready.go:82] duration metric: took 398.882515ms for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.588035  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.783048  392787 request.go:632] Waited for 194.906609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:13:01.783143  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:13:01.783150  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.783158  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.783168  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.786633  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.982698  392787 request.go:632] Waited for 194.578986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:01.982779  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:01.982791  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.982801  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.982808  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.986249  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.986974  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:01.986999  392787 pod_ready.go:82] duration metric: took 398.955367ms for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.987013  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.183061  392787 request.go:632] Waited for 195.922884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m03
	I0916 18:13:02.183155  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m03
	I0916 18:13:02.183167  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.183180  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.183189  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.187631  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:02.383575  392787 request.go:632] Waited for 195.153908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:02.383651  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:02.383657  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.383666  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.383670  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.387023  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:02.387737  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:02.387762  392787 pod_ready.go:82] duration metric: took 400.741572ms for pod "kube-controller-manager-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.387772  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.582850  392787 request.go:632] Waited for 194.977586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:13:02.582926  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:13:02.582935  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.582946  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.582956  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.586646  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:02.783544  392787 request.go:632] Waited for 196.229158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:02.783626  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:02.783631  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.783639  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.783643  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.787351  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:02.787936  392787 pod_ready.go:93] pod "kube-proxy-4rfbj" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:02.787957  392787 pod_ready.go:82] duration metric: took 400.175389ms for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.787967  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mjljp" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.982749  392787 request.go:632] Waited for 194.672685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjljp
	I0916 18:13:02.982827  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjljp
	I0916 18:13:02.982835  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.982843  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.982849  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.986721  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.182765  392787 request.go:632] Waited for 195.290403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:03.182853  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:03.182859  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.182868  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.182871  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.187284  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:03.188075  392787 pod_ready.go:93] pod "kube-proxy-mjljp" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:03.188101  392787 pod_ready.go:82] duration metric: took 400.127597ms for pod "kube-proxy-mjljp" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.188115  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.383188  392787 request.go:632] Waited for 194.985677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:13:03.383283  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:13:03.383294  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.383305  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.383311  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.387031  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.583305  392787 request.go:632] Waited for 195.368535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:03.583374  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:03.583382  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.583392  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.583399  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.587275  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.587830  392787 pod_ready.go:93] pod "kube-proxy-nrqvf" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:03.587854  392787 pod_ready.go:82] duration metric: took 399.726525ms for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.587866  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.782909  392787 request.go:632] Waited for 194.941802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:13:03.782977  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:13:03.782984  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.782994  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.783000  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.786673  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.982600  392787 request.go:632] Waited for 195.308926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:03.982664  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:03.982669  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.982676  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.982681  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.985742  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.986404  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:03.986422  392787 pod_ready.go:82] duration metric: took 398.54947ms for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.986432  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.183531  392787 request.go:632] Waited for 197.004679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:13:04.183623  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:13:04.183634  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.183646  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.183656  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.188127  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:04.383008  392787 request.go:632] Waited for 194.245966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:04.383084  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:04.383091  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.383101  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.383115  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.390859  392787 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 18:13:04.391350  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:04.391370  392787 pod_ready.go:82] duration metric: took 404.930794ms for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.391379  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.583595  392787 request.go:632] Waited for 192.100389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m03
	I0916 18:13:04.583657  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m03
	I0916 18:13:04.583663  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.583671  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.583675  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.587702  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:04.783641  392787 request.go:632] Waited for 195.346085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:04.783704  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:04.783712  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.783722  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.783731  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.787824  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:04.788304  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:04.788324  392787 pod_ready.go:82] duration metric: took 396.938315ms for pod "kube-scheduler-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.788335  392787 pod_ready.go:39] duration metric: took 5.201535788s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
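The pod_ready lines above poll each system-critical pod until its Ready condition reports True, with a 6m0s per-pod timeout. A minimal client-go sketch of that kind of readiness poll is shown below; it is not minikube's pod_ready.go, and the default kubeconfig path and the kube-proxy-nrqvf target pod are only illustrative assumptions taken from the log.

```go
// Sketch only: poll one pod's Ready condition with client-go until it is True
// or a timeout elapses (mirrors the 6m0s per-pod wait in the log above).
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s, give up after 6 minutes.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-nrqvf", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```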
	I0916 18:13:04.788352  392787 api_server.go:52] waiting for apiserver process to appear ...
	I0916 18:13:04.788407  392787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:13:04.804417  392787 api_server.go:72] duration metric: took 23.063094336s to wait for apiserver process to appear ...
	I0916 18:13:04.804447  392787 api_server.go:88] waiting for apiserver healthz status ...
	I0916 18:13:04.804469  392787 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0916 18:13:04.809550  392787 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0916 18:13:04.809652  392787 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0916 18:13:04.809661  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.809670  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.809678  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.810883  392787 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 18:13:04.810953  392787 api_server.go:141] control plane version: v1.31.1
	I0916 18:13:04.810969  392787 api_server.go:131] duration metric: took 6.515714ms to wait for apiserver health ...
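The api_server lines above first probe /healthz (expecting the literal body "ok") and then read /version to confirm the control-plane version. A rough equivalent using client-go's discovery client, assuming kubeconfig-based auth rather than the test's in-VM setup, could look like this:

```go
// Sketch only: reproduce the healthz and version probes from the log above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz via the discovery REST client; a healthy apiserver returns "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version; corresponds to the "control plane version: v1.31.1" line above.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
```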
	I0916 18:13:04.810977  392787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 18:13:04.983408  392787 request.go:632] Waited for 172.33212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:04.983479  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:04.983486  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.983497  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.983507  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.990262  392787 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 18:13:04.997971  392787 system_pods.go:59] 24 kube-system pods found
	I0916 18:13:04.998002  392787 system_pods.go:61] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:13:04.998007  392787 system_pods.go:61] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:13:04.998012  392787 system_pods.go:61] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:13:04.998015  392787 system_pods.go:61] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:13:04.998019  392787 system_pods.go:61] "etcd-ha-365438-m03" [45ddb461-9dd3-427f-a452-5877e0d64c70] Running
	I0916 18:13:04.998022  392787 system_pods.go:61] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:13:04.998025  392787 system_pods.go:61] "kindnet-99gkn" [10d5b9d6-42b5-4e43-9338-9af09c16e31d] Running
	I0916 18:13:04.998028  392787 system_pods.go:61] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:13:04.998032  392787 system_pods.go:61] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:13:04.998035  392787 system_pods.go:61] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:13:04.998038  392787 system_pods.go:61] "kube-apiserver-ha-365438-m03" [d0d96b4f-e681-41c0-9880-1b08a79dae8b] Running
	I0916 18:13:04.998041  392787 system_pods.go:61] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:13:04.998045  392787 system_pods.go:61] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:13:04.998051  392787 system_pods.go:61] "kube-controller-manager-ha-365438-m03" [d66ec66c-bcb2-406c-bce2-b9fa2e926a94] Running
	I0916 18:13:04.998056  392787 system_pods.go:61] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:13:04.998062  392787 system_pods.go:61] "kube-proxy-mjljp" [796ffc54-f5ab-4475-a94b-f1b5c0e3b016] Running
	I0916 18:13:04.998067  392787 system_pods.go:61] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:13:04.998072  392787 system_pods.go:61] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:13:04.998080  392787 system_pods.go:61] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:13:04.998085  392787 system_pods.go:61] "kube-scheduler-ha-365438-m03" [3128b7cd-6481-4cf0-90bd-848a297928ae] Running
	I0916 18:13:04.998088  392787 system_pods.go:61] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:13:04.998091  392787 system_pods.go:61] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:13:04.998094  392787 system_pods.go:61] "kube-vip-ha-365438-m03" [a9526f41-9953-4e9a-848b-ffe4f138550b] Running
	I0916 18:13:04.998097  392787 system_pods.go:61] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:13:04.998103  392787 system_pods.go:74] duration metric: took 187.120656ms to wait for pod list to return data ...
	I0916 18:13:04.998115  392787 default_sa.go:34] waiting for default service account to be created ...
	I0916 18:13:05.183572  392787 request.go:632] Waited for 185.361206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:13:05.183653  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:13:05.183664  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:05.183674  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:05.183684  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:05.189857  392787 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 18:13:05.190030  392787 default_sa.go:45] found service account: "default"
	I0916 18:13:05.190056  392787 default_sa.go:55] duration metric: took 191.933191ms for default service account to be created ...
	I0916 18:13:05.190067  392787 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 18:13:05.383549  392787 request.go:632] Waited for 193.39071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:05.383624  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:05.383631  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:05.383641  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:05.383652  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:05.389473  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:13:05.396913  392787 system_pods.go:86] 24 kube-system pods found
	I0916 18:13:05.396965  392787 system_pods.go:89] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:13:05.396972  392787 system_pods.go:89] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:13:05.396976  392787 system_pods.go:89] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:13:05.396981  392787 system_pods.go:89] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:13:05.396984  392787 system_pods.go:89] "etcd-ha-365438-m03" [45ddb461-9dd3-427f-a452-5877e0d64c70] Running
	I0916 18:13:05.396988  392787 system_pods.go:89] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:13:05.396991  392787 system_pods.go:89] "kindnet-99gkn" [10d5b9d6-42b5-4e43-9338-9af09c16e31d] Running
	I0916 18:13:05.396995  392787 system_pods.go:89] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:13:05.396999  392787 system_pods.go:89] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:13:05.397003  392787 system_pods.go:89] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:13:05.397007  392787 system_pods.go:89] "kube-apiserver-ha-365438-m03" [d0d96b4f-e681-41c0-9880-1b08a79dae8b] Running
	I0916 18:13:05.397010  392787 system_pods.go:89] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:13:05.397014  392787 system_pods.go:89] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:13:05.397020  392787 system_pods.go:89] "kube-controller-manager-ha-365438-m03" [d66ec66c-bcb2-406c-bce2-b9fa2e926a94] Running
	I0916 18:13:05.397027  392787 system_pods.go:89] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:13:05.397031  392787 system_pods.go:89] "kube-proxy-mjljp" [796ffc54-f5ab-4475-a94b-f1b5c0e3b016] Running
	I0916 18:13:05.397037  392787 system_pods.go:89] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:13:05.397041  392787 system_pods.go:89] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:13:05.397047  392787 system_pods.go:89] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:13:05.397051  392787 system_pods.go:89] "kube-scheduler-ha-365438-m03" [3128b7cd-6481-4cf0-90bd-848a297928ae] Running
	I0916 18:13:05.397057  392787 system_pods.go:89] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:13:05.397060  392787 system_pods.go:89] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:13:05.397066  392787 system_pods.go:89] "kube-vip-ha-365438-m03" [a9526f41-9953-4e9a-848b-ffe4f138550b] Running
	I0916 18:13:05.397069  392787 system_pods.go:89] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:13:05.397077  392787 system_pods.go:126] duration metric: took 207.003058ms to wait for k8s-apps to be running ...
	I0916 18:13:05.397086  392787 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 18:13:05.397134  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:13:05.413583  392787 system_svc.go:56] duration metric: took 16.48209ms WaitForService to wait for kubelet
	I0916 18:13:05.413618  392787 kubeadm.go:582] duration metric: took 23.672302076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
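The system_svc check above runs `sudo systemctl is-active --quiet kubelet` over SSH inside the node; the command exits 0 only while the unit is active. A local sketch of the same probe (assuming it is run on the node itself, e.g. inside `minikube ssh`, rather than through minikube's ssh_runner) might be:

```go
// Sketch only: check whether the kubelet systemd unit is active on this host.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet kubelet" exits non-zero when the unit is not active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```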
	I0916 18:13:05.413636  392787 node_conditions.go:102] verifying NodePressure condition ...
	I0916 18:13:05.583122  392787 request.go:632] Waited for 169.380554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0916 18:13:05.583181  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0916 18:13:05.583186  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:05.583193  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:05.583205  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:05.587044  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:05.587993  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:13:05.588017  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:13:05.588032  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:13:05.588037  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:13:05.588042  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:13:05.588046  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:13:05.588052  392787 node_conditions.go:105] duration metric: took 174.411005ms to run NodePressure ...
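The node_conditions lines above read each node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node here). A small sketch that lists nodes and prints those two capacity values, again assuming a default kubeconfig rather than the test harness, could be:

```go
// Sketch only: print per-node cpu and ephemeral-storage capacity,
// matching the node_conditions output in the log above.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```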
	I0916 18:13:05.588067  392787 start.go:241] waiting for startup goroutines ...
	I0916 18:13:05.588095  392787 start.go:255] writing updated cluster config ...
	I0916 18:13:05.588448  392787 ssh_runner.go:195] Run: rm -f paused
	I0916 18:13:05.639929  392787 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 18:13:05.642140  392787 out.go:177] * Done! kubectl is now configured to use "ha-365438" cluster and "default" namespace by default
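The final line reports which context and namespace kubectl will now use. A minimal sketch that re-checks this from the default kubeconfig loading rules (an assumption for illustration, not something the test itself runs) might be:

```go
// Sketch only: report the current kubectl context and namespace from the
// default kubeconfig loading rules, as confirmed by the "Done!" line above.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, &clientcmd.ConfigOverrides{}).RawConfig()
	if err != nil {
		panic(err)
	}
	ctx := cfg.Contexts[cfg.CurrentContext]
	ns := "default" // kubectl falls back to "default" when the context sets no namespace
	if ctx != nil && ctx.Namespace != "" {
		ns = ctx.Namespace
	}
	fmt.Printf("current context: %s, namespace: %s\n", cfg.CurrentContext, ns)
}
```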
	
	
	==> CRI-O <==
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.931039650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510606931015894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16722e0d-4be0-4a63-94de-878450aeda4b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.931566276Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3d06cf3-3c8b-4297-9a65-ae8ebfa60328 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.931637857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3d06cf3-3c8b-4297-9a65-ae8ebfa60328 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.931859868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510390752076476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d,PodSandboxId:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244944427572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d39f7ccc716d1bbe4a8e70d241c5eea171e9c5637f11bc659a65ea0a3b67016,PodSandboxId:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510244926782227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61,PodSandboxId:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244845260706,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-33
65-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17265102
32596868635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510232322859466,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803,PodSandboxId:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510224182241691,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510221036880445,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486,PodSandboxId:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510221000983994,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510220989336285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6,PodSandboxId:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510220897947709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3d06cf3-3c8b-4297-9a65-ae8ebfa60328 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.972203544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5418ec4b-2f4f-46f5-bc44-5259fc227c59 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.972301672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5418ec4b-2f4f-46f5-bc44-5259fc227c59 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.973172187Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=812d0ec2-cda6-459c-b6bd-3ffad10c6f9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.973850416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510606973824556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=812d0ec2-cda6-459c-b6bd-3ffad10c6f9c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.974322981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=014528ea-6459-4f54-9dd4-82f83f958ed6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.974395210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=014528ea-6459-4f54-9dd4-82f83f958ed6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:46 ha-365438 crio[665]: time="2024-09-16 18:16:46.974704062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510390752076476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d,PodSandboxId:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244944427572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d39f7ccc716d1bbe4a8e70d241c5eea171e9c5637f11bc659a65ea0a3b67016,PodSandboxId:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510244926782227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61,PodSandboxId:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244845260706,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-33
65-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17265102
32596868635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510232322859466,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803,PodSandboxId:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510224182241691,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510221036880445,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486,PodSandboxId:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510221000983994,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510220989336285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6,PodSandboxId:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510220897947709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=014528ea-6459-4f54-9dd4-82f83f958ed6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.018572970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f0ff38f-3390-40e2-abb6-c3c166e11090 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.018666914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f0ff38f-3390-40e2-abb6-c3c166e11090 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.019712434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=584787b0-c9c2-4b94-9c18-d8d24ce89959 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.020128542Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510607020108397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=584787b0-c9c2-4b94-9c18-d8d24ce89959 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.020673272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06b86f2b-4bc3-49ab-a929-f3198e6d1122 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.020743805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06b86f2b-4bc3-49ab-a929-f3198e6d1122 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.020957481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510390752076476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d,PodSandboxId:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244944427572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d39f7ccc716d1bbe4a8e70d241c5eea171e9c5637f11bc659a65ea0a3b67016,PodSandboxId:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510244926782227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61,PodSandboxId:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244845260706,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-33
65-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17265102
32596868635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510232322859466,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803,PodSandboxId:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510224182241691,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510221036880445,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486,PodSandboxId:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510221000983994,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510220989336285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6,PodSandboxId:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510220897947709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06b86f2b-4bc3-49ab-a929-f3198e6d1122 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.068767438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8ba2774-8dba-4ff7-8a3e-c876608aa3c7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.068858517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8ba2774-8dba-4ff7-8a3e-c876608aa3c7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.070661017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51a07a53-fb0a-40d8-80e2-4bec413f5401 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.071102572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510607071080487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51a07a53-fb0a-40d8-80e2-4bec413f5401 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.071826750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34e996c6-46d2-487f-8068-defe0e0c4d06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.071899406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34e996c6-46d2-487f-8068-defe0e0c4d06 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:16:47 ha-365438 crio[665]: time="2024-09-16 18:16:47.072139915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510390752076476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d,PodSandboxId:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244944427572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d39f7ccc716d1bbe4a8e70d241c5eea171e9c5637f11bc659a65ea0a3b67016,PodSandboxId:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510244926782227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61,PodSandboxId:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244845260706,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-33
65-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17265102
32596868635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510232322859466,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803,PodSandboxId:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510224182241691,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510221036880445,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486,PodSandboxId:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510221000983994,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510220989336285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6,PodSandboxId:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510220897947709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34e996c6-46d2-487f-8068-defe0e0c4d06 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c688c47b509b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   45427fea44b56       busybox-7dff88458-8lxm5
	637415283f8f3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   fe46e69c89ef4       coredns-7c65d6cfc9-9svk8
	6d39f7ccc716d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ea14225ed4b22       storage-provisioner
	cc48bfbff79f1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f9e31847522a4       coredns-7c65d6cfc9-zh7sm
	ae842d37f79ef       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   16b1b97f4eee2       kindnet-599gk
	fced6ce81805e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   c7bb352443d32       kube-proxy-4rfbj
	bdc152e65d13d       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   1832b99e80b46       kube-vip-ha-365438
	4afcf5ad24d43       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4415d47ee85c8       kube-scheduler-ha-365438
	c88b73102e4d2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   96b362b092e85       kube-controller-manager-ha-365438
	ee90a7de312ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   265048ac4715e       etcd-ha-365438
	36d26d8df5e6b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   391ae22fdb2ee       kube-apiserver-ha-365438
	
	
	==> coredns [637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d] <==
	[INFO] 10.244.0.4:55046 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001617632s
	[INFO] 10.244.1.2:47379 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000105448s
	[INFO] 10.244.2.2:51760 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003638819s
	[INFO] 10.244.2.2:46488 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003037697s
	[INFO] 10.244.2.2:44401 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139379s
	[INFO] 10.244.2.2:56173 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112442s
	[INFO] 10.244.0.4:32857 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002069592s
	[INFO] 10.244.0.4:35029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155002s
	[INFO] 10.244.0.4:49666 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167499s
	[INFO] 10.244.0.4:41304 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152323s
	[INFO] 10.244.0.4:41961 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100019s
	[INFO] 10.244.1.2:48555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001727554s
	[INFO] 10.244.1.2:37688 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206956s
	[INFO] 10.244.1.2:44275 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110571s
	[INFO] 10.244.1.2:37001 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093275s
	[INFO] 10.244.1.2:57811 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112973s
	[INFO] 10.244.2.2:55064 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213698s
	[INFO] 10.244.0.4:37672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109862s
	[INFO] 10.244.0.4:45703 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118782s
	[INFO] 10.244.1.2:52420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150103s
	[INFO] 10.244.2.2:52865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149526s
	[INFO] 10.244.0.4:44130 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119328s
	[INFO] 10.244.0.4:51235 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014834s
	[INFO] 10.244.1.2:43653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000142634s
	[INFO] 10.244.1.2:57111 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010624s
	
	
	==> coredns [cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61] <==
	[INFO] 10.244.2.2:53710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142911s
	[INFO] 10.244.2.2:58300 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214269s
	[INFO] 10.244.2.2:58024 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167123s
	[INFO] 10.244.2.2:45004 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159707s
	[INFO] 10.244.0.4:52424 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109638s
	[INFO] 10.244.0.4:57524 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000203423s
	[INFO] 10.244.0.4:54948 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001771392s
	[INFO] 10.244.1.2:42603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150119s
	[INFO] 10.244.1.2:33836 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012192s
	[INFO] 10.244.1.2:43769 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001330856s
	[INFO] 10.244.2.2:36423 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180227s
	[INFO] 10.244.2.2:37438 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200625s
	[INFO] 10.244.2.2:51918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177109s
	[INFO] 10.244.0.4:40286 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009687s
	[INFO] 10.244.0.4:48298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090197s
	[INFO] 10.244.1.2:55488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228564s
	[INFO] 10.244.1.2:56818 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000347056s
	[INFO] 10.244.1.2:48235 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199345s
	[INFO] 10.244.2.2:47702 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156299s
	[INFO] 10.244.2.2:56845 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193247s
	[INFO] 10.244.2.2:51347 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151041s
	[INFO] 10.244.0.4:52543 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181864s
	[INFO] 10.244.0.4:60962 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097958s
	[INFO] 10.244.1.2:48543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201712s
	[INFO] 10.244.1.2:47958 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011421s
	
	
	==> describe nodes <==
	Name:               ha-365438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T18_10_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:10:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:16:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-365438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 428a6b3869674553b5fa368f548d44fe
	  System UUID:                428a6b38-6967-4553-b5fa-368f548d44fe
	  Boot ID:                    bf6a145c-4c83-434e-832f-5377ceb5d93e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8lxm5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 coredns-7c65d6cfc9-9svk8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7c65d6cfc9-zh7sm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-365438                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-599gk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-365438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-365438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-4rfbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-365438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-365438                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m27s (x7 over 6m27s)  kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m27s (x8 over 6m27s)  kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s (x8 over 6m27s)  kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s                  kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s                  kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s                  kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal  NodeReady                6m3s                   kubelet          Node ha-365438 status is now: NodeReady
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	
	
	Name:               ha-365438-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_11_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:11:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:14:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-365438-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37dacc83603e40abb19ac133e9d2c030
	  System UUID:                37dacc83-603e-40ab-b19a-c133e9d2c030
	  Boot ID:                    5550a2cd-442d-4fc2-aaf0-b6d4f273236b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8whmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-365438-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-q2vlq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-365438-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-controller-manager-ha-365438-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-nrqvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-365438-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-365438-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-365438-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m22s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  NodeNotReady             101s                   node-controller  Node ha-365438-m02 status is now: NodeNotReady
	
	
	Name:               ha-365438-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_12_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:12:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:16:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-365438-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 113546b28b1a45aca3d715558877ace5
	  System UUID:                113546b2-8b1a-45ac-a3d7-15558877ace5
	  Boot ID:                    57192e71-e2a9-47b0-8ee4-d31dbab88507
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4hs24                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-365438-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-99gkn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-365438-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-365438-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-mjljp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-365438-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-365438-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m5s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-365438-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	
	
	Name:               ha-365438-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_13_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:16:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:13:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:13:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:13:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:14:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-365438-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a60d15c35e49c89cf5c86d6e9e7127
	  System UUID:                19a60d15-c35e-49c8-9cf5-c86d6e9e7127
	  Boot ID:                    53999728-4b75-46ca-92fe-01082b4d22f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gjxct       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-pln82    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-365438-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-365438-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 18:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050391] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.824360] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556090] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep16 18:10] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.575440] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073590] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.211872] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.138357] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.296917] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.171159] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.216894] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.069713] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.331842] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.083288] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.257891] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.535666] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 18:11] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce] <==
	{"level":"warn","ts":"2024-09-16T18:16:47.351935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.362293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.369436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.388089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.398760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.407199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.412117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.416656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.424010Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.431672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.439552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.443346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.444607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.447842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.453688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.460362Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.467628Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.471348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.474672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.482121Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.489456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.495895Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.504926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.518354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:16:47.544594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:16:47 up 6 min,  0 users,  load average: 0.11, 0.22, 0.12
	Linux ha-365438 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d] <==
	I0916 18:16:13.857262       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:16:23.854938       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:16:23.855118       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:16:23.855350       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:16:23.855391       1 main.go:299] handling current node
	I0916 18:16:23.855428       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:16:23.855446       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:16:23.855618       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:16:23.855647       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:16:33.848847       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:16:33.848971       1 main.go:299] handling current node
	I0916 18:16:33.849015       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:16:33.849044       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:16:33.849302       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:16:33.849361       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:16:33.849536       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:16:33.849578       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:16:43.855696       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:16:43.855744       1 main.go:299] handling current node
	I0916 18:16:43.855758       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:16:43.855763       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:16:43.855870       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:16:43.855893       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:16:43.855961       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:16:43.855989       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6] <==
	I0916 18:10:27.244394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 18:10:27.265179       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 18:10:27.280369       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 18:10:31.798242       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 18:10:31.864144       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0916 18:12:38.928935       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0916 18:12:38.929374       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 12.563µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0916 18:12:38.930789       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0916 18:12:38.932083       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0916 18:12:38.933386       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.58541ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0916 18:13:11.410930       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54582: use of closed network connection
	E0916 18:13:11.619705       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54610: use of closed network connection
	E0916 18:13:11.816958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54624: use of closed network connection
	E0916 18:13:12.024707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54642: use of closed network connection
	E0916 18:13:12.233572       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54662: use of closed network connection
	E0916 18:13:12.435652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54686: use of closed network connection
	E0916 18:13:12.626780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	E0916 18:13:12.810687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54738: use of closed network connection
	E0916 18:13:12.995777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54752: use of closed network connection
	E0916 18:13:13.307726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0916 18:13:13.490522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54826: use of closed network connection
	E0916 18:13:13.683198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54842: use of closed network connection
	E0916 18:13:13.864048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54866: use of closed network connection
	E0916 18:13:14.060925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54886: use of closed network connection
	E0916 18:13:14.258968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54898: use of closed network connection
	
	
	==> kube-controller-manager [c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486] <==
	I0916 18:13:44.924449       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-365438-m04" podCIDRs=["10.244.3.0/24"]
	I0916 18:13:44.924624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:44.927263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:44.949065       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:45.139259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:45.548969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:45.871445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.024001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.024282       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-365438-m04"
	I0916 18:13:46.052316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.394397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.424081       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:55.156809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:05.787818       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	I0916 18:14:05.788815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:05.809341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:06.044684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:15.558924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:15:06.071107       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	I0916 18:15:06.071770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:15:06.095977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:15:06.241653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="90.877135ms"
	I0916 18:15:06.241802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.468µs"
	I0916 18:15:06.443579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:15:11.332301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	
	
	==> kube-proxy [fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:10:32.987622       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 18:10:33.018026       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E0916 18:10:33.018217       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:10:33.098891       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:10:33.098934       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:10:33.098958       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:10:33.106834       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:10:33.107514       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:10:33.107530       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:10:33.111336       1 config.go:199] "Starting service config controller"
	I0916 18:10:33.113381       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:10:33.114077       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:10:33.115158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:10:33.118847       1 config.go:328] "Starting node config controller"
	I0916 18:10:33.118884       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:10:33.218918       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:10:33.218989       1 shared_informer.go:320] Caches are synced for node config
	I0916 18:10:33.219044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804] <==
	E0916 18:10:25.314834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.381719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 18:10:25.381817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.390706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 18:10:25.390815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.432183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 18:10:25.432289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.436726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 18:10:25.436823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.529314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 18:10:25.529380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 18:10:27.404687       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 18:12:38.213573       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-99gkn\": pod kindnet-99gkn is already assigned to node \"ha-365438-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-99gkn" node="ha-365438-m03"
	E0916 18:12:38.216530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 10d5b9d6-42b5-4e43-9338-9af09c16e31d(kube-system/kindnet-99gkn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-99gkn"
	E0916 18:12:38.217004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-99gkn\": pod kindnet-99gkn is already assigned to node \"ha-365438-m03\"" pod="kube-system/kindnet-99gkn"
	I0916 18:12:38.217215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-99gkn" node="ha-365438-m03"
	I0916 18:13:06.562653       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="f2ef0616-2379-49c3-af53-b3779fb4448f" pod="default/busybox-7dff88458-4hs24" assumedNode="ha-365438-m03" currentNode="ha-365438-m02"
	E0916 18:13:06.587442       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4hs24" node="ha-365438-m02"
	E0916 18:13:06.587523       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f2ef0616-2379-49c3-af53-b3779fb4448f(default/busybox-7dff88458-4hs24) was assumed on ha-365438-m02 but assigned to ha-365438-m03" pod="default/busybox-7dff88458-4hs24"
	E0916 18:13:06.587555       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" pod="default/busybox-7dff88458-4hs24"
	I0916 18:13:06.587578       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4hs24" node="ha-365438-m03"
	E0916 18:13:06.618090       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	E0916 18:13:06.618528       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 11bd1f64-d695-4fc7-bec9-5694a7552fdf(default/busybox-7dff88458-8whmx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8whmx"
	E0916 18:13:06.618607       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" pod="default/busybox-7dff88458-8whmx"
	I0916 18:13:06.618663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	
	
	==> kubelet <==
	Sep 16 18:15:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:15:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:15:27 ha-365438 kubelet[1307]: E0916 18:15:27.311355    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510527310977425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:27 ha-365438 kubelet[1307]: E0916 18:15:27.311384    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510527310977425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:37 ha-365438 kubelet[1307]: E0916 18:15:37.313902    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510537313431789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:37 ha-365438 kubelet[1307]: E0916 18:15:37.313930    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510537313431789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:47 ha-365438 kubelet[1307]: E0916 18:15:47.315733    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510547315227722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:47 ha-365438 kubelet[1307]: E0916 18:15:47.316120    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510547315227722,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:57 ha-365438 kubelet[1307]: E0916 18:15:57.318235    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510557317217001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:15:57 ha-365438 kubelet[1307]: E0916 18:15:57.319147    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510557317217001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:07 ha-365438 kubelet[1307]: E0916 18:16:07.321000    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510567320692007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:07 ha-365438 kubelet[1307]: E0916 18:16:07.321035    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510567320692007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:17 ha-365438 kubelet[1307]: E0916 18:16:17.323712    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510577322623231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:17 ha-365438 kubelet[1307]: E0916 18:16:17.323743    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510577322623231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:27 ha-365438 kubelet[1307]: E0916 18:16:27.254054    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:16:27 ha-365438 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:16:27 ha-365438 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:16:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:16:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:16:27 ha-365438 kubelet[1307]: E0916 18:16:27.327272    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510587326206067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:27 ha-365438 kubelet[1307]: E0916 18:16:27.327317    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510587326206067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:37 ha-365438 kubelet[1307]: E0916 18:16:37.329202    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510597328725522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:37 ha-365438 kubelet[1307]: E0916 18:16:37.329562    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510597328725522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:47 ha-365438 kubelet[1307]: E0916 18:16:47.331913    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510607331315869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:47 ha-365438 kubelet[1307]: E0916 18:16:47.331941    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510607331315869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-365438 -n ha-365438
helpers_test.go:261: (dbg) Run:  kubectl --context ha-365438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (46.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (3.216417677s)

                                                
                                                
-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:16:52.126915  397599 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:16:52.127030  397599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:16:52.127035  397599 out.go:358] Setting ErrFile to fd 2...
	I0916 18:16:52.127040  397599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:16:52.127246  397599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:16:52.127624  397599 out.go:352] Setting JSON to false
	I0916 18:16:52.127665  397599 mustload.go:65] Loading cluster: ha-365438
	I0916 18:16:52.127705  397599 notify.go:220] Checking for updates...
	I0916 18:16:52.128138  397599 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:16:52.128155  397599 status.go:255] checking status of ha-365438 ...
	I0916 18:16:52.128652  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:52.128695  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:52.144014  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0916 18:16:52.144571  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:52.145165  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:52.145190  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:52.145517  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:52.145690  397599 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:16:52.147221  397599 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:16:52.147241  397599 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:16:52.147590  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:52.147635  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:52.163893  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0916 18:16:52.164425  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:52.165111  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:52.165142  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:52.165524  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:52.165778  397599 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:16:52.168716  397599 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:52.169146  397599 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:16:52.169173  397599 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:52.169309  397599 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:16:52.169703  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:52.169753  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:52.186254  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32839
	I0916 18:16:52.186711  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:52.187165  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:52.187188  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:52.187583  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:52.187802  397599 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:16:52.188040  397599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:52.188079  397599 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:16:52.190775  397599 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:52.191180  397599 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:16:52.191207  397599 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:52.191315  397599 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:16:52.191489  397599 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:16:52.191652  397599 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:16:52.191788  397599 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:16:52.284124  397599 ssh_runner.go:195] Run: systemctl --version
	I0916 18:16:52.291370  397599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:52.307645  397599 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:16:52.307690  397599 api_server.go:166] Checking apiserver status ...
	I0916 18:16:52.307727  397599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:16:52.322704  397599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:16:52.335342  397599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:16:52.335406  397599 ssh_runner.go:195] Run: ls
	I0916 18:16:52.340434  397599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:16:52.345223  397599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:16:52.345254  397599 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:16:52.345280  397599 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:16:52.345300  397599 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:16:52.345732  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:52.345785  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:52.361483  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0916 18:16:52.362040  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:52.362579  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:52.362609  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:52.362917  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:52.363144  397599 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:16:52.364689  397599 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:16:52.364704  397599 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:16:52.365033  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:52.365077  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:52.380426  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0916 18:16:52.380937  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:52.381492  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:52.381513  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:52.381864  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:52.382059  397599 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:16:52.385014  397599 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:52.385421  397599 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:16:52.385447  397599 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:52.385624  397599 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:16:52.385937  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:52.386000  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:52.401937  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33155
	I0916 18:16:52.402472  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:52.402982  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:52.403003  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:52.403326  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:52.403522  397599 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:16:52.403703  397599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:52.403725  397599 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:16:52.406311  397599 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:52.406718  397599 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:16:52.406733  397599 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:52.406906  397599 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:16:52.407089  397599 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:16:52.407391  397599 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:16:52.407558  397599 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:16:54.929264  397599 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:16:54.929419  397599 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:16:54.929446  397599 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:16:54.929455  397599 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:16:54.929474  397599 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:16:54.929482  397599 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:16:54.929814  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:54.929858  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:54.945192  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46551
	I0916 18:16:54.945733  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:54.946210  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:54.946229  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:54.946573  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:54.946779  397599 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:16:54.948589  397599 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:16:54.948613  397599 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:16:54.948936  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:54.948979  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:54.964287  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0916 18:16:54.964700  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:54.965228  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:54.965255  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:54.965595  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:54.965802  397599 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:16:54.968589  397599 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:54.969008  397599 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:16:54.969025  397599 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:54.969199  397599 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:16:54.969526  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:54.969568  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:54.984960  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0916 18:16:54.985439  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:54.985957  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:54.985980  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:54.986364  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:54.986524  397599 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:16:54.986667  397599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:54.986691  397599 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:16:54.989359  397599 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:54.989814  397599 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:16:54.989842  397599 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:16:54.990000  397599 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:16:54.990168  397599 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:16:54.990298  397599 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:16:54.990431  397599 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:16:55.073767  397599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:55.091240  397599 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:16:55.091268  397599 api_server.go:166] Checking apiserver status ...
	I0916 18:16:55.091305  397599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:16:55.106258  397599 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:16:55.117600  397599 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:16:55.117662  397599 ssh_runner.go:195] Run: ls
	I0916 18:16:55.123390  397599 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:16:55.130841  397599 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:16:55.130871  397599 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:16:55.130880  397599 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:16:55.130910  397599 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:16:55.131281  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:55.131323  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:55.146732  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37461
	I0916 18:16:55.147262  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:55.147734  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:55.147757  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:55.148073  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:55.148297  397599 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:16:55.149977  397599 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:16:55.149997  397599 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:16:55.150394  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:55.150442  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:55.166520  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0916 18:16:55.167031  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:55.167535  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:55.167558  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:55.167904  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:55.168092  397599 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:16:55.170789  397599 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:55.171212  397599 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:16:55.171245  397599 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:55.171479  397599 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:16:55.171827  397599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:55.171874  397599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:55.188396  397599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37235
	I0916 18:16:55.188888  397599 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:55.189356  397599 main.go:141] libmachine: Using API Version  1
	I0916 18:16:55.189379  397599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:55.189703  397599 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:55.189883  397599 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:16:55.190066  397599 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:55.190089  397599 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:16:55.193450  397599 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:55.194299  397599 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:16:55.194329  397599 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:16:55.194505  397599 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:16:55.194736  397599 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:16:55.194894  397599 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:16:55.195041  397599 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:16:55.281840  397599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:55.297399  397599 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (5.23088723s)

-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 18:16:56.256674  397699 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:16:56.256947  397699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:16:56.256959  397699 out.go:358] Setting ErrFile to fd 2...
	I0916 18:16:56.256965  397699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:16:56.257214  397699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:16:56.257396  397699 out.go:352] Setting JSON to false
	I0916 18:16:56.257429  397699 mustload.go:65] Loading cluster: ha-365438
	I0916 18:16:56.257523  397699 notify.go:220] Checking for updates...
	I0916 18:16:56.257839  397699 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:16:56.257855  397699 status.go:255] checking status of ha-365438 ...
	I0916 18:16:56.258274  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:56.258341  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:56.274782  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33685
	I0916 18:16:56.275279  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:56.275882  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:16:56.275907  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:56.276402  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:56.276664  397699 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:16:56.278420  397699 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:16:56.278442  397699 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:16:56.278785  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:56.278831  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:56.294234  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I0916 18:16:56.294715  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:56.295243  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:16:56.295277  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:56.295645  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:56.295845  397699 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:16:56.298903  397699 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:56.299360  397699 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:16:56.299398  397699 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:56.299564  397699 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:16:56.300017  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:56.300071  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:56.315221  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38949
	I0916 18:16:56.315664  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:56.316127  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:16:56.316152  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:56.316484  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:56.316679  397699 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:16:56.316887  397699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:56.316909  397699 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:16:56.320009  397699 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:56.320629  397699 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:16:56.320671  397699 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:16:56.320949  397699 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:16:56.321125  397699 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:16:56.321294  397699 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:16:56.321438  397699 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:16:56.404885  397699 ssh_runner.go:195] Run: systemctl --version
	I0916 18:16:56.411080  397699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:16:56.425626  397699 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:16:56.425683  397699 api_server.go:166] Checking apiserver status ...
	I0916 18:16:56.425729  397699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:16:56.439725  397699 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:16:56.449243  397699 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:16:56.449302  397699 ssh_runner.go:195] Run: ls
	I0916 18:16:56.453715  397699 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:16:56.460308  397699 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:16:56.460335  397699 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:16:56.460346  397699 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:16:56.460361  397699 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:16:56.460680  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:56.460729  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:56.476784  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0916 18:16:56.477256  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:56.477858  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:16:56.477878  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:56.478256  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:56.478490  397699 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:16:56.480119  397699 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:16:56.480145  397699 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:16:56.480477  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:56.480512  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:56.496257  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0916 18:16:56.496807  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:56.497354  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:16:56.497374  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:56.497685  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:56.497865  397699 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:16:56.500782  397699 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:56.501280  397699 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:16:56.501310  397699 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:56.501628  397699 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:16:56.502169  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:16:56.502275  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:16:56.517866  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0916 18:16:56.518416  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:16:56.518939  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:16:56.518975  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:16:56.519422  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:16:56.519674  397699 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:16:56.519883  397699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:16:56.519905  397699 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:16:56.522724  397699 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:56.523111  397699 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:16:56.523137  397699 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:16:56.523269  397699 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:16:56.523448  397699 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:16:56.523612  397699 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:16:56.523756  397699 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:16:58.001277  397699 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:16:58.001338  397699 retry.go:31] will retry after 189.476811ms: dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:01.073280  397699 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:01.073435  397699 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:17:01.073460  397699 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:01.073479  397699 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:17:01.073500  397699 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:01.073507  397699 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:17:01.073822  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:01.073865  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:01.089979  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0916 18:17:01.090414  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:01.090917  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:17:01.090940  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:01.091351  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:01.091588  397699 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:01.093241  397699 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:17:01.093258  397699 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:01.093580  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:01.093621  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:01.109986  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I0916 18:17:01.110499  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:01.111012  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:17:01.111035  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:01.111353  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:01.111557  397699 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:17:01.114347  397699 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:01.114788  397699 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:01.114811  397699 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:01.114947  397699 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:01.115372  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:01.115420  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:01.131217  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0916 18:17:01.131704  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:01.132230  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:17:01.132251  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:01.132594  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:01.132818  397699 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:01.133024  397699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:01.133050  397699 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:01.135803  397699 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:01.136280  397699 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:01.136308  397699 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:01.136439  397699 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:01.136614  397699 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:01.136766  397699 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:01.136934  397699 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:01.221265  397699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:01.239717  397699 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:01.239748  397699 api_server.go:166] Checking apiserver status ...
	I0916 18:17:01.239782  397699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:01.259417  397699 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:17:01.271572  397699 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:01.271633  397699 ssh_runner.go:195] Run: ls
	I0916 18:17:01.276987  397699 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:01.281831  397699 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:01.281858  397699 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:17:01.281867  397699 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:01.281883  397699 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:17:01.282177  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:01.282221  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:01.297888  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34493
	I0916 18:17:01.298393  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:01.298932  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:17:01.298957  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:01.299275  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:01.299470  397699 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:01.301111  397699 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:17:01.301128  397699 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:01.301413  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:01.301453  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:01.316638  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0916 18:17:01.317161  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:01.317713  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:17:01.317740  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:01.318021  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:01.318202  397699 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:17:01.321143  397699 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:01.321555  397699 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:01.321582  397699 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:01.321740  397699 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:01.322161  397699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:01.322217  397699 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:01.337500  397699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34797
	I0916 18:17:01.337995  397699 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:01.338522  397699 main.go:141] libmachine: Using API Version  1
	I0916 18:17:01.338543  397699 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:01.338874  397699 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:01.339048  397699 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:01.339248  397699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:01.339272  397699 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:01.341983  397699 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:01.342435  397699 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:01.342465  397699 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:01.342568  397699 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:01.342748  397699 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:01.342897  397699 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:01.343058  397699 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:01.425494  397699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:01.440107  397699 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (5.018944728s)

-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 18:17:02.607235  397807 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:17:02.607383  397807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:02.607394  397807 out.go:358] Setting ErrFile to fd 2...
	I0916 18:17:02.607401  397807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:02.607590  397807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:17:02.607760  397807 out.go:352] Setting JSON to false
	I0916 18:17:02.607790  397807 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:02.607850  397807 notify.go:220] Checking for updates...
	I0916 18:17:02.608393  397807 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:02.608419  397807 status.go:255] checking status of ha-365438 ...
	I0916 18:17:02.608951  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:02.608999  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:02.624463  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0916 18:17:02.624954  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:02.625553  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:02.625578  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:02.626051  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:02.626291  397807 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:17:02.628180  397807 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:17:02.628201  397807 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:02.628665  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:02.628726  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:02.644485  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42457
	I0916 18:17:02.644899  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:02.645430  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:02.645452  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:02.645764  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:02.645974  397807 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:17:02.648638  397807 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:02.649166  397807 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:02.649199  397807 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:02.649408  397807 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:02.649735  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:02.649781  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:02.666047  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41179
	I0916 18:17:02.666586  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:02.667121  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:02.667148  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:02.667492  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:02.667730  397807 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:17:02.667954  397807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:02.668000  397807 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:17:02.671080  397807 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:02.671571  397807 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:02.671610  397807 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:02.671756  397807 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:17:02.671933  397807 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:17:02.672094  397807 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:17:02.672280  397807 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:17:02.757384  397807 ssh_runner.go:195] Run: systemctl --version
	I0916 18:17:02.763804  397807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:02.780858  397807 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:02.780901  397807 api_server.go:166] Checking apiserver status ...
	I0916 18:17:02.780969  397807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:02.796615  397807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:17:02.808495  397807 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:02.808565  397807 ssh_runner.go:195] Run: ls
	I0916 18:17:02.813427  397807 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:02.819822  397807 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:02.819852  397807 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:17:02.819873  397807 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:02.819891  397807 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:17:02.820247  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:02.820311  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:02.835724  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0916 18:17:02.836242  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:02.836755  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:02.836778  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:02.837129  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:02.837335  397807 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:17:02.838966  397807 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:17:02.838982  397807 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:02.839366  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:02.839429  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:02.854527  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40679
	I0916 18:17:02.854988  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:02.855534  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:02.855557  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:02.855892  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:02.856068  397807 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:17:02.858978  397807 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:02.859306  397807 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:02.859328  397807 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:02.859490  397807 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:02.859807  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:02.859852  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:02.875095  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0916 18:17:02.875638  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:02.876176  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:02.876198  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:02.876653  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:02.876862  397807 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:17:02.877117  397807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:02.877141  397807 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:17:02.880307  397807 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:02.880773  397807 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:02.880801  397807 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:02.880966  397807 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:17:02.881131  397807 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:17:02.881232  397807 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:17:02.881423  397807 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:17:04.145240  397807 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:04.145295  397807 retry.go:31] will retry after 232.280177ms: dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:07.217228  397807 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:07.217374  397807 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:17:07.217406  397807 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:07.217417  397807 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:17:07.217453  397807 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:07.217461  397807 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:17:07.217904  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:07.217965  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:07.233385  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40503
	I0916 18:17:07.233965  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:07.234455  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:07.234475  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:07.234856  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:07.235071  397807 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:07.237076  397807 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:17:07.237094  397807 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:07.237388  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:07.237434  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:07.253454  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32907
	I0916 18:17:07.253892  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:07.254423  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:07.254448  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:07.254968  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:07.255238  397807 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:17:07.258302  397807 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:07.258755  397807 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:07.258780  397807 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:07.259101  397807 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:07.259565  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:07.259623  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:07.275218  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
	I0916 18:17:07.275752  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:07.276298  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:07.276322  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:07.276762  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:07.276997  397807 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:07.277211  397807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:07.277244  397807 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:07.280547  397807 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:07.281041  397807 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:07.281080  397807 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:07.281253  397807 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:07.281463  397807 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:07.281630  397807 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:07.281777  397807 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:07.364815  397807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:07.381162  397807 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:07.381198  397807 api_server.go:166] Checking apiserver status ...
	I0916 18:17:07.381237  397807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:07.397438  397807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:17:07.408574  397807 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:07.408656  397807 ssh_runner.go:195] Run: ls
	I0916 18:17:07.413194  397807 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:07.417970  397807 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:07.417995  397807 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:17:07.418003  397807 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:07.418026  397807 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:17:07.418342  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:07.418374  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:07.434124  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37379
	I0916 18:17:07.434616  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:07.435062  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:07.435086  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:07.435448  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:07.435671  397807 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:07.437353  397807 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:17:07.437372  397807 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:07.437801  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:07.437847  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:07.453004  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0916 18:17:07.453501  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:07.454041  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:07.454063  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:07.454419  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:07.454638  397807 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:17:07.458048  397807 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:07.458534  397807 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:07.458565  397807 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:07.458753  397807 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:07.459075  397807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:07.459117  397807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:07.475404  397807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34115
	I0916 18:17:07.475965  397807 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:07.476475  397807 main.go:141] libmachine: Using API Version  1
	I0916 18:17:07.476495  397807 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:07.476869  397807 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:07.477076  397807 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:07.477283  397807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:07.477310  397807 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:07.480152  397807 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:07.480493  397807 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:07.480512  397807 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:07.480714  397807 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:07.480906  397807 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:07.481074  397807 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:07.481202  397807 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:07.564616  397807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:07.579278  397807 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (4.541770954s)

-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0916 18:17:09.456229  397923 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:17:09.456367  397923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:09.456376  397923 out.go:358] Setting ErrFile to fd 2...
	I0916 18:17:09.456380  397923 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:09.456557  397923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:17:09.456734  397923 out.go:352] Setting JSON to false
	I0916 18:17:09.456763  397923 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:09.456832  397923 notify.go:220] Checking for updates...
	I0916 18:17:09.457200  397923 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:09.457221  397923 status.go:255] checking status of ha-365438 ...
	I0916 18:17:09.457615  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:09.457668  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:09.473794  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40867
	I0916 18:17:09.474300  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:09.475002  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:09.475032  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:09.475435  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:09.475658  397923 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:17:09.477385  397923 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:17:09.477404  397923 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:09.477841  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:09.477895  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:09.493543  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40317
	I0916 18:17:09.494037  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:09.494574  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:09.494609  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:09.494909  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:09.495081  397923 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:17:09.498057  397923 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:09.498541  397923 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:09.498561  397923 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:09.498741  397923 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:09.499021  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:09.499078  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:09.514115  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0916 18:17:09.514565  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:09.515082  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:09.515104  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:09.515517  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:09.515776  397923 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:17:09.515967  397923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:09.516022  397923 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:17:09.519184  397923 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:09.519720  397923 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:09.519749  397923 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:09.519906  397923 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:17:09.520091  397923 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:17:09.520278  397923 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:17:09.520437  397923 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:17:09.606007  397923 ssh_runner.go:195] Run: systemctl --version
	I0916 18:17:09.612620  397923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:09.629386  397923 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:09.629429  397923 api_server.go:166] Checking apiserver status ...
	I0916 18:17:09.629469  397923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:09.644506  397923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:17:09.654782  397923 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:09.654849  397923 ssh_runner.go:195] Run: ls
	I0916 18:17:09.659162  397923 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:09.663270  397923 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:09.663296  397923 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:17:09.663308  397923 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:09.663335  397923 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:17:09.663682  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:09.663734  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:09.679238  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33989
	I0916 18:17:09.679754  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:09.680244  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:09.680265  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:09.680579  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:09.680767  397923 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:17:09.682211  397923 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:17:09.682230  397923 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:09.682532  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:09.682565  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:09.697778  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0916 18:17:09.698150  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:09.698623  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:09.698643  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:09.699000  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:09.699222  397923 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:17:09.701823  397923 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:09.702262  397923 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:09.702286  397923 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:09.702427  397923 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:09.702787  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:09.702832  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:09.717881  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38397
	I0916 18:17:09.718400  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:09.718926  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:09.718950  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:09.719289  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:09.719510  397923 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:17:09.719681  397923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:09.719704  397923 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:17:09.722532  397923 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:09.723001  397923 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:09.723028  397923 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:09.723153  397923 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:17:09.723337  397923 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:17:09.723473  397923 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:17:09.723598  397923 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:17:10.289162  397923 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:10.289244  397923 retry.go:31] will retry after 222.420554ms: dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:13.585161  397923 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:13.585263  397923 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:17:13.585299  397923 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:13.585311  397923 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:17:13.585336  397923 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:13.585343  397923 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:17:13.585689  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:13.585735  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:13.601113  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45119
	I0916 18:17:13.601653  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:13.602200  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:13.602224  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:13.602595  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:13.602840  397923 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:13.604314  397923 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:17:13.604333  397923 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:13.604653  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:13.604687  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:13.619419  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0916 18:17:13.619794  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:13.620248  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:13.620276  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:13.620592  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:13.620804  397923 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:17:13.623446  397923 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:13.623880  397923 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:13.623901  397923 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:13.624049  397923 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:13.624340  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:13.624379  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:13.639139  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33757
	I0916 18:17:13.639650  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:13.640172  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:13.640201  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:13.640534  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:13.640703  397923 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:13.640956  397923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:13.640988  397923 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:13.643640  397923 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:13.643973  397923 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:13.644013  397923 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:13.644122  397923 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:13.644269  397923 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:13.644384  397923 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:13.644510  397923 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:13.725590  397923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:13.743285  397923 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:13.743319  397923 api_server.go:166] Checking apiserver status ...
	I0916 18:17:13.743363  397923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:13.759659  397923 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:17:13.777306  397923 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:13.777369  397923 ssh_runner.go:195] Run: ls
	I0916 18:17:13.782379  397923 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:13.788988  397923 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:13.789019  397923 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:17:13.789029  397923 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:13.789044  397923 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:17:13.789358  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:13.789400  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:13.806313  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36005
	I0916 18:17:13.806845  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:13.807335  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:13.807360  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:13.807713  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:13.807990  397923 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:13.809638  397923 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:17:13.809659  397923 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:13.810128  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:13.810187  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:13.825994  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35607
	I0916 18:17:13.826486  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:13.827053  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:13.827103  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:13.827493  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:13.827755  397923 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:17:13.831388  397923 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:13.831912  397923 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:13.831944  397923 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:13.832125  397923 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:13.832483  397923 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:13.832543  397923 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:13.847541  397923 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0916 18:17:13.848110  397923 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:13.848650  397923 main.go:141] libmachine: Using API Version  1
	I0916 18:17:13.848673  397923 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:13.849051  397923 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:13.849267  397923 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:13.849444  397923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:13.849464  397923 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:13.852389  397923 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:13.852859  397923 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:13.852884  397923 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:13.853007  397923 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:13.853174  397923 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:13.853353  397923 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:13.853495  397923 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:13.936352  397923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:13.950348  397923 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
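The status probe traced above measures disk usage on every node by running sh -c "df -h /var | awk 'NR==2{print $5}'" over SSH and reading back the used-capacity percentage of /var. A minimal, self-contained Go sketch of that same check, run locally purely for illustration (the helper name and error handling here are hypothetical, not taken from minikube's sources):

// varusage.go - illustrative sketch of the df/awk capacity check seen in the
// trace above; not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// varUsagePercent runs the same shell pipeline the probe issues and parses
// the "NN%" field printed by awk into an integer percentage.
func varUsagePercent() (int, error) {
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	if err != nil {
		return 0, fmt.Errorf("df failed: %w", err)
	}
	pct := strings.TrimSuffix(strings.TrimSpace(string(out)), "%")
	return strconv.Atoi(pct)
}

func main() {
	pct, err := varUsagePercent()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("/var is %d%% used\n", pct)
}

In the failing case above the probe never gets that far for ha-365438-m02: the SSH dial to 192.168.39.18:22 returns "no route to host", so the node is reported as Host:Error with Kubelet and APIServer marked Nonexistent.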
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (3.743030274s)

                                                
                                                
-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:17:16.401765  398022 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:17:16.401905  398022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:16.401917  398022 out.go:358] Setting ErrFile to fd 2...
	I0916 18:17:16.401923  398022 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:16.402155  398022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:17:16.402331  398022 out.go:352] Setting JSON to false
	I0916 18:17:16.402365  398022 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:16.402486  398022 notify.go:220] Checking for updates...
	I0916 18:17:16.402940  398022 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:16.402964  398022 status.go:255] checking status of ha-365438 ...
	I0916 18:17:16.403467  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:16.403512  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:16.419544  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46239
	I0916 18:17:16.420041  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:16.420617  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:16.420638  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:16.421096  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:16.421298  398022 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:17:16.422910  398022 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:17:16.422927  398022 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:16.423281  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:16.423326  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:16.438866  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I0916 18:17:16.439278  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:16.439760  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:16.439775  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:16.440085  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:16.440304  398022 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:17:16.442829  398022 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:16.443358  398022 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:16.443407  398022 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:16.443614  398022 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:16.443901  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:16.443944  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:16.459196  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39683
	I0916 18:17:16.459639  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:16.460119  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:16.460140  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:16.460535  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:16.460717  398022 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:17:16.460930  398022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:16.460972  398022 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:17:16.463832  398022 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:16.464333  398022 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:16.464373  398022 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:16.464632  398022 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:17:16.464818  398022 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:17:16.464955  398022 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:17:16.465105  398022 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:17:16.549044  398022 ssh_runner.go:195] Run: systemctl --version
	I0916 18:17:16.554945  398022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:16.569211  398022 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:16.569250  398022 api_server.go:166] Checking apiserver status ...
	I0916 18:17:16.569300  398022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:16.584034  398022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:17:16.593457  398022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:16.593520  398022 ssh_runner.go:195] Run: ls
	I0916 18:17:16.598139  398022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:16.602425  398022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:16.602455  398022 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:17:16.602467  398022 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:16.602483  398022 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:17:16.602780  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:16.602816  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:16.618016  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33063
	I0916 18:17:16.618520  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:16.619054  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:16.619073  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:16.619460  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:16.619687  398022 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:17:16.621373  398022 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:17:16.621394  398022 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:16.621746  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:16.621797  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:16.637379  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0916 18:17:16.637760  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:16.638243  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:16.638263  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:16.638582  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:16.638779  398022 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:17:16.641676  398022 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:16.642119  398022 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:16.642153  398022 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:16.642288  398022 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:16.642625  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:16.642671  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:16.657601  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0916 18:17:16.658130  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:16.658860  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:16.658879  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:16.659187  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:16.659380  398022 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:17:16.659557  398022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:16.659580  398022 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:17:16.662716  398022 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:16.663099  398022 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:16.663127  398022 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:16.663291  398022 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:17:16.663482  398022 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:17:16.663624  398022 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:17:16.663806  398022 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:17:19.733199  398022 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:19.733304  398022 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:17:19.733321  398022 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:19.733329  398022 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:17:19.733361  398022 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:19.733368  398022 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:17:19.733699  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:19.733744  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:19.749251  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I0916 18:17:19.749752  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:19.750318  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:19.750341  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:19.750735  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:19.750914  398022 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:19.752595  398022 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:17:19.752615  398022 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:19.752972  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:19.753020  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:19.771058  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36027
	I0916 18:17:19.771449  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:19.771926  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:19.771950  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:19.772258  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:19.772427  398022 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:17:19.774840  398022 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:19.775310  398022 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:19.775341  398022 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:19.775453  398022 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:19.775855  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:19.775905  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:19.791587  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I0916 18:17:19.792036  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:19.792551  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:19.792606  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:19.792975  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:19.793138  398022 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:19.793285  398022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:19.793308  398022 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:19.796411  398022 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:19.796942  398022 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:19.796983  398022 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:19.797111  398022 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:19.797280  398022 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:19.797402  398022 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:19.797516  398022 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:19.882012  398022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:19.896434  398022 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:19.896466  398022 api_server.go:166] Checking apiserver status ...
	I0916 18:17:19.896519  398022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:19.915027  398022 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:17:19.926141  398022 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:19.926217  398022 ssh_runner.go:195] Run: ls
	I0916 18:17:19.931094  398022 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:19.935584  398022 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:19.935615  398022 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:17:19.935630  398022 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:19.935648  398022 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:17:19.935945  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:19.935981  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:19.951048  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0916 18:17:19.951581  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:19.952069  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:19.952092  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:19.952413  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:19.952608  398022 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:19.954125  398022 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:17:19.954141  398022 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:19.954470  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:19.954514  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:19.969868  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0916 18:17:19.970348  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:19.970777  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:19.970798  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:19.971200  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:19.971365  398022 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:17:19.974698  398022 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:19.975148  398022 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:19.975186  398022 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:19.975306  398022 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:19.975645  398022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:19.975699  398022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:19.992169  398022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42337
	I0916 18:17:19.992714  398022 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:19.993455  398022 main.go:141] libmachine: Using API Version  1
	I0916 18:17:19.993485  398022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:19.993906  398022 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:19.994115  398022 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:19.994349  398022 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:19.994375  398022 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:19.996892  398022 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:19.997385  398022 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:19.997418  398022 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:19.997551  398022 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:19.997702  398022 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:19.997830  398022 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:19.997926  398022 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:20.084799  398022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:20.099125  398022 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
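Each control-plane check in the traces above ends with a GET against https://192.168.39.254:8443/healthz and treats a 200 response with body "ok" as a healthy apiserver. A minimal Go sketch of that kind of probe (certificate verification is skipped only to keep the example self-contained, and the function name is illustrative; a real client would trust the cluster CA from the kubeconfig):

// healthz.go - illustrative sketch of the /healthz probe logged above; not
// minikube code.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy reports whether the endpoint answers 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify keeps the sketch self-contained; a real check
		// would verify the apiserver certificate against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
	fmt.Println("healthy:", healthy, "err:", err)
}

Because 192.168.39.254:8443 is the shared virtual endpoint for the HA control plane, this probe still succeeds from ha-365438 and ha-365438-m03 even while ha-365438-m02 itself is unreachable over SSH.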
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (3.769517247s)

                                                
                                                
-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:17:24.871833  398138 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:17:24.871949  398138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:24.871954  398138 out.go:358] Setting ErrFile to fd 2...
	I0916 18:17:24.871958  398138 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:24.872177  398138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:17:24.872354  398138 out.go:352] Setting JSON to false
	I0916 18:17:24.872385  398138 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:24.872501  398138 notify.go:220] Checking for updates...
	I0916 18:17:24.872856  398138 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:24.872871  398138 status.go:255] checking status of ha-365438 ...
	I0916 18:17:24.873324  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:24.873379  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:24.890538  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0916 18:17:24.891066  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:24.891798  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:24.891835  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:24.892258  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:24.892457  398138 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:17:24.894084  398138 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:17:24.894108  398138 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:24.894538  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:24.894588  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:24.909969  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41235
	I0916 18:17:24.910379  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:24.910823  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:24.910849  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:24.911169  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:24.911340  398138 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:17:24.914127  398138 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:24.914548  398138 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:24.914574  398138 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:24.914724  398138 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:24.915043  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:24.915082  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:24.933206  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40747
	I0916 18:17:24.933707  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:24.934317  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:24.934350  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:24.934744  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:24.934968  398138 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:17:24.935154  398138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:24.935193  398138 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:17:24.938152  398138 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:24.938557  398138 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:24.938587  398138 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:24.938690  398138 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:17:24.938897  398138 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:17:24.939037  398138 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:17:24.939184  398138 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:17:25.021560  398138 ssh_runner.go:195] Run: systemctl --version
	I0916 18:17:25.028018  398138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:25.048229  398138 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:25.048276  398138 api_server.go:166] Checking apiserver status ...
	I0916 18:17:25.048326  398138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:25.063466  398138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:17:25.080836  398138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:25.080957  398138 ssh_runner.go:195] Run: ls
	I0916 18:17:25.085756  398138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:25.090133  398138 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:25.090161  398138 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:17:25.090175  398138 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:25.090197  398138 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:17:25.090580  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:25.090623  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:25.106525  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44229
	I0916 18:17:25.107054  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:25.107522  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:25.107545  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:25.107936  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:25.108112  398138 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:17:25.109664  398138 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:17:25.109682  398138 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:25.109989  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:25.110032  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:25.125028  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0916 18:17:25.125459  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:25.125958  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:25.125985  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:25.126325  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:25.126551  398138 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:17:25.129468  398138 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:25.129897  398138 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:25.129928  398138 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:25.130045  398138 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:17:25.130426  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:25.130473  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:25.146116  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0916 18:17:25.146543  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:25.146995  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:25.147015  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:25.147350  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:25.147670  398138 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:17:25.147961  398138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:25.147985  398138 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:17:25.150826  398138 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:25.151202  398138 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:17:25.151230  398138 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:17:25.151383  398138 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:17:25.151560  398138 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:17:25.151687  398138 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:17:25.151783  398138 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	W0916 18:17:28.209178  398138 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.18:22: connect: no route to host
	W0916 18:17:28.209313  398138 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	E0916 18:17:28.209339  398138 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:28.209348  398138 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:17:28.209376  398138 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.18:22: connect: no route to host
	I0916 18:17:28.209387  398138 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:17:28.209788  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:28.209842  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:28.226217  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0916 18:17:28.226671  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:28.227919  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:28.227946  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:28.228327  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:28.228511  398138 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:28.234730  398138 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:17:28.234764  398138 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:28.235178  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:28.235231  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:28.252688  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42659
	I0916 18:17:28.253158  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:28.253669  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:28.253697  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:28.254036  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:28.254723  398138 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:17:28.257733  398138 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:28.258194  398138 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:28.258239  398138 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:28.258363  398138 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:28.258712  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:28.258753  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:28.275680  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0916 18:17:28.276180  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:28.276698  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:28.276719  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:28.277148  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:28.277343  398138 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:28.277554  398138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:28.277579  398138 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:28.280781  398138 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:28.281295  398138 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:28.281319  398138 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:28.281483  398138 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:28.281674  398138 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:28.281819  398138 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:28.281984  398138 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:28.368525  398138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:28.388388  398138 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:28.388422  398138 api_server.go:166] Checking apiserver status ...
	I0916 18:17:28.388455  398138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:28.405080  398138 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:17:28.416904  398138 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:28.416972  398138 ssh_runner.go:195] Run: ls
	I0916 18:17:28.421677  398138 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:28.429474  398138 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:28.429504  398138 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:17:28.429513  398138 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:28.429530  398138 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:17:28.429846  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:28.429908  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:28.444761  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0916 18:17:28.445354  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:28.445902  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:28.445926  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:28.446264  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:28.446478  398138 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:28.448268  398138 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:17:28.448288  398138 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:28.448593  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:28.448636  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:28.464369  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0916 18:17:28.464795  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:28.465317  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:28.465339  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:28.465682  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:28.465903  398138 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:17:28.468699  398138 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:28.469215  398138 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:28.469255  398138 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:28.469404  398138 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:28.469715  398138 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:28.469754  398138 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:28.485813  398138 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0916 18:17:28.486298  398138 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:28.486799  398138 main.go:141] libmachine: Using API Version  1
	I0916 18:17:28.486820  398138 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:28.487142  398138 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:28.487351  398138 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:28.487531  398138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:28.487550  398138 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:28.490303  398138 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:28.490735  398138 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:28.490764  398138 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:28.490941  398138 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:28.491130  398138 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:28.491279  398138 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:28.491402  398138 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:28.576957  398138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:28.592820  398138 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
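The stderr above is the probe sequence behind the status call: an SSH dial to each node, df -h /var for disk usage, a kubelet systemd check, and a healthz request against the HA VIP; only the dial to ha-365438-m02 fails (no route to host), which is why that node is reported as Host:Error with Kubelet/APIServer Nonexistent. A rough manual replay of the same probes, reusing the key paths, node IPs, and VIP printed in the log above (a debugging sketch only, not something the harness runs):

	ssh -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa docker@192.168.39.18 true   # reachability of m02; expected to fail per the log above
	ssh -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa docker@192.168.39.231 "df -h /var | awk 'NR==2{print \$5}'"   # disk-usage probe on a reachable node
	ssh -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa docker@192.168.39.231 "sudo systemctl is-active --quiet service kubelet && echo kubelet active"   # kubelet probe
	curl -k https://192.168.39.254:8443/healthz   # apiserver health via the HA VIP (the check above got 200/ok)
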
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 7 (634.31603ms)

                                                
                                                
-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-365438-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:17:35.564881  398276 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:17:35.565180  398276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:35.565190  398276 out.go:358] Setting ErrFile to fd 2...
	I0916 18:17:35.565195  398276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:35.565418  398276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:17:35.565634  398276 out.go:352] Setting JSON to false
	I0916 18:17:35.565672  398276 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:35.565782  398276 notify.go:220] Checking for updates...
	I0916 18:17:35.566152  398276 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:35.566167  398276 status.go:255] checking status of ha-365438 ...
	I0916 18:17:35.566652  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.566713  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.582645  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0916 18:17:35.583177  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.583808  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.583840  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.584256  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.584452  398276 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:17:35.586377  398276 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:17:35.586393  398276 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:35.586726  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.586783  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.602277  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45901
	I0916 18:17:35.602697  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.603286  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.603325  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.603683  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.603889  398276 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:17:35.606845  398276 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:35.607249  398276 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:35.607292  398276 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:35.607461  398276 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:17:35.607838  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.607909  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.623616  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39995
	I0916 18:17:35.624051  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.624539  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.624559  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.624854  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.625059  398276 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:17:35.625262  398276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:35.625292  398276 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:17:35.628120  398276 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:35.628550  398276 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:17:35.628566  398276 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:17:35.628685  398276 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:17:35.628867  398276 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:17:35.629048  398276 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:17:35.629183  398276 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:17:35.713134  398276 ssh_runner.go:195] Run: systemctl --version
	I0916 18:17:35.719593  398276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:35.734985  398276 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:35.735027  398276 api_server.go:166] Checking apiserver status ...
	I0916 18:17:35.735064  398276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:35.749443  398276 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup
	W0916 18:17:35.759672  398276 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1069/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:35.759727  398276 ssh_runner.go:195] Run: ls
	I0916 18:17:35.764433  398276 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:35.768812  398276 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:35.768832  398276 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:17:35.768843  398276 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:35.768865  398276 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:17:35.769187  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.769230  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.784099  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34689
	I0916 18:17:35.784555  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.785146  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.785166  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.785479  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.785682  398276 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:17:35.787400  398276 status.go:330] ha-365438-m02 host status = "Stopped" (err=<nil>)
	I0916 18:17:35.787414  398276 status.go:343] host is not running, skipping remaining checks
	I0916 18:17:35.787420  398276 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:35.787437  398276 status.go:255] checking status of ha-365438-m03 ...
	I0916 18:17:35.787732  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.787775  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.803246  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0916 18:17:35.803881  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.804409  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.804434  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.804863  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.805111  398276 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:35.806867  398276 status.go:330] ha-365438-m03 host status = "Running" (err=<nil>)
	I0916 18:17:35.806886  398276 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:35.807181  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.807225  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.823472  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0916 18:17:35.823910  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.824404  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.824425  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.824750  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.824933  398276 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:17:35.827770  398276 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:35.828276  398276 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:35.828303  398276 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:35.828484  398276 host.go:66] Checking if "ha-365438-m03" exists ...
	I0916 18:17:35.828785  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.828821  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:35.843929  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43809
	I0916 18:17:35.844435  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:35.844991  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:35.845026  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:35.845352  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:35.845646  398276 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:35.845846  398276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:35.845871  398276 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:35.848610  398276 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:35.849031  398276 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:35.849061  398276 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:35.849176  398276 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:35.849340  398276 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:35.849485  398276 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:35.849641  398276 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:35.932761  398276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:35.948554  398276 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:17:35.948589  398276 api_server.go:166] Checking apiserver status ...
	I0916 18:17:35.948633  398276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:17:35.963328  398276 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup
	W0916 18:17:35.974437  398276 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1397/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:17:35.974495  398276 ssh_runner.go:195] Run: ls
	I0916 18:17:35.978911  398276 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:17:35.983728  398276 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:17:35.983759  398276 status.go:422] ha-365438-m03 apiserver status = Running (err=<nil>)
	I0916 18:17:35.983770  398276 status.go:257] ha-365438-m03 status: &{Name:ha-365438-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:17:35.983786  398276 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:17:35.984105  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:35.984141  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:36.000003  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
	I0916 18:17:36.000546  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:36.001171  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:36.001201  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:36.001551  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:36.001767  398276 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:36.003661  398276 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:17:36.003682  398276 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:36.003989  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:36.004036  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:36.021276  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
	I0916 18:17:36.021782  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:36.022343  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:36.022375  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:36.022719  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:36.022913  398276 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:17:36.025578  398276 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:36.026069  398276 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:36.026099  398276 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:36.026255  398276 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:17:36.026552  398276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:36.026607  398276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:36.045980  398276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I0916 18:17:36.046491  398276 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:36.047038  398276 main.go:141] libmachine: Using API Version  1
	I0916 18:17:36.047061  398276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:36.047470  398276 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:36.047668  398276 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:36.047860  398276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:17:36.047885  398276 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:36.050743  398276 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:36.051221  398276 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:36.051251  398276 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:36.051423  398276 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:36.051583  398276 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:36.051747  398276 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:36.051915  398276 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:36.136625  398276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:17:36.151998  398276 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr" : exit status 7
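The non-zero exit matches the stdout above: ha-365438-m02 is still reported as Stopped while the other nodes are Running, and minikube status exits non-zero whenever a profile is not fully up. For manual triage, the same commands the test drives (see the node start entry in the Audit log below) can be replayed against the profile; a sketch:

	out/minikube-linux-amd64 -p ha-365438 node start m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
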
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-365438 -n ha-365438
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-365438 logs -n 25: (1.44519789s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438:/home/docker/cp-test_ha-365438-m03_ha-365438.txt                       |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438 sudo cat                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438.txt                                 |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m04 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp testdata/cp-test.txt                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438:/home/docker/cp-test_ha-365438-m04_ha-365438.txt                       |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438 sudo cat                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438.txt                                 |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03:/home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m03 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-365438 node stop m02 -v=7                                                     | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-365438 node start m02 -v=7                                                    | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 18:09:45
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 18:09:45.861740  392787 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:09:45.861864  392787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:09:45.861873  392787 out.go:358] Setting ErrFile to fd 2...
	I0916 18:09:45.861876  392787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:09:45.862039  392787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:09:45.862626  392787 out.go:352] Setting JSON to false
	I0916 18:09:45.863602  392787 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6729,"bootTime":1726503457,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:09:45.863708  392787 start.go:139] virtualization: kvm guest
	I0916 18:09:45.865949  392787 out.go:177] * [ha-365438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:09:45.867472  392787 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:09:45.867509  392787 notify.go:220] Checking for updates...
	I0916 18:09:45.870430  392787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:09:45.872039  392787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:09:45.873613  392787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:09:45.875149  392787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:09:45.876420  392787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:09:45.877805  392787 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:09:45.913887  392787 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 18:09:45.915112  392787 start.go:297] selected driver: kvm2
	I0916 18:09:45.915124  392787 start.go:901] validating driver "kvm2" against <nil>
	I0916 18:09:45.915137  392787 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:09:45.915845  392787 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:09:45.915944  392787 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 18:09:45.931147  392787 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 18:09:45.931218  392787 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 18:09:45.931517  392787 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:09:45.931559  392787 cni.go:84] Creating CNI manager for ""
	I0916 18:09:45.931612  392787 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 18:09:45.931620  392787 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 18:09:45.931682  392787 start.go:340] cluster config:
	{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:09:45.931778  392787 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:09:45.933943  392787 out.go:177] * Starting "ha-365438" primary control-plane node in "ha-365438" cluster
	I0916 18:09:45.935381  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:09:45.935438  392787 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 18:09:45.935448  392787 cache.go:56] Caching tarball of preloaded images
	I0916 18:09:45.935550  392787 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:09:45.935561  392787 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:09:45.935870  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:09:45.935895  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json: {Name:mkb6c5565eaaa6718155d06cabf91699df9faa1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:09:45.936041  392787 start.go:360] acquireMachinesLock for ha-365438: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:09:45.936069  392787 start.go:364] duration metric: took 15.895µs to acquireMachinesLock for "ha-365438"
	I0916 18:09:45.936085  392787 start.go:93] Provisioning new machine with config: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:09:45.936144  392787 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 18:09:45.937672  392787 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 18:09:45.937824  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:09:45.937874  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:09:45.952974  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43601
	I0916 18:09:45.953548  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:09:45.954158  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:09:45.954181  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:09:45.954547  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:09:45.954720  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:09:45.954868  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:09:45.955015  392787 start.go:159] libmachine.API.Create for "ha-365438" (driver="kvm2")
	I0916 18:09:45.955048  392787 client.go:168] LocalClient.Create starting
	I0916 18:09:45.955096  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 18:09:45.955136  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:09:45.955157  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:09:45.955234  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 18:09:45.955262  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:09:45.955283  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:09:45.955309  392787 main.go:141] libmachine: Running pre-create checks...
	I0916 18:09:45.955321  392787 main.go:141] libmachine: (ha-365438) Calling .PreCreateCheck
	I0916 18:09:45.955657  392787 main.go:141] libmachine: (ha-365438) Calling .GetConfigRaw
	I0916 18:09:45.956025  392787 main.go:141] libmachine: Creating machine...
	I0916 18:09:45.956040  392787 main.go:141] libmachine: (ha-365438) Calling .Create
	I0916 18:09:45.956186  392787 main.go:141] libmachine: (ha-365438) Creating KVM machine...
	I0916 18:09:45.957461  392787 main.go:141] libmachine: (ha-365438) DBG | found existing default KVM network
	I0916 18:09:45.958151  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:45.958019  392810 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0916 18:09:45.958250  392787 main.go:141] libmachine: (ha-365438) DBG | created network xml: 
	I0916 18:09:45.958269  392787 main.go:141] libmachine: (ha-365438) DBG | <network>
	I0916 18:09:45.958279  392787 main.go:141] libmachine: (ha-365438) DBG |   <name>mk-ha-365438</name>
	I0916 18:09:45.958289  392787 main.go:141] libmachine: (ha-365438) DBG |   <dns enable='no'/>
	I0916 18:09:45.958297  392787 main.go:141] libmachine: (ha-365438) DBG |   
	I0916 18:09:45.958305  392787 main.go:141] libmachine: (ha-365438) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 18:09:45.958316  392787 main.go:141] libmachine: (ha-365438) DBG |     <dhcp>
	I0916 18:09:45.958327  392787 main.go:141] libmachine: (ha-365438) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 18:09:45.958336  392787 main.go:141] libmachine: (ha-365438) DBG |     </dhcp>
	I0916 18:09:45.958364  392787 main.go:141] libmachine: (ha-365438) DBG |   </ip>
	I0916 18:09:45.958372  392787 main.go:141] libmachine: (ha-365438) DBG |   
	I0916 18:09:45.958376  392787 main.go:141] libmachine: (ha-365438) DBG | </network>
	I0916 18:09:45.958403  392787 main.go:141] libmachine: (ha-365438) DBG | 
	I0916 18:09:45.963564  392787 main.go:141] libmachine: (ha-365438) DBG | trying to create private KVM network mk-ha-365438 192.168.39.0/24...
	I0916 18:09:46.030993  392787 main.go:141] libmachine: (ha-365438) DBG | private KVM network mk-ha-365438 192.168.39.0/24 created
	I0916 18:09:46.031030  392787 main.go:141] libmachine: (ha-365438) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438 ...
	I0916 18:09:46.031043  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.030933  392810 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:09:46.031099  392787 main.go:141] libmachine: (ha-365438) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 18:09:46.031127  392787 main.go:141] libmachine: (ha-365438) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 18:09:46.302314  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.302075  392810 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa...
	I0916 18:09:46.432576  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.432389  392810 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/ha-365438.rawdisk...
	I0916 18:09:46.432634  392787 main.go:141] libmachine: (ha-365438) DBG | Writing magic tar header
	I0916 18:09:46.432653  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438 (perms=drwx------)
	I0916 18:09:46.432673  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 18:09:46.432685  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 18:09:46.432701  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 18:09:46.432717  392787 main.go:141] libmachine: (ha-365438) DBG | Writing SSH key tar header
	I0916 18:09:46.432729  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:46.432504  392810 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438 ...
	I0916 18:09:46.432766  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438
	I0916 18:09:46.432803  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 18:09:46.432817  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:09:46.432833  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 18:09:46.432850  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 18:09:46.433024  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 18:09:46.433062  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home/jenkins
	I0916 18:09:46.433085  392787 main.go:141] libmachine: (ha-365438) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 18:09:46.433289  392787 main.go:141] libmachine: (ha-365438) DBG | Checking permissions on dir: /home
	I0916 18:09:46.433779  392787 main.go:141] libmachine: (ha-365438) Creating domain...
	I0916 18:09:46.433790  392787 main.go:141] libmachine: (ha-365438) DBG | Skipping /home - not owner
	I0916 18:09:46.434962  392787 main.go:141] libmachine: (ha-365438) define libvirt domain using xml: 
	I0916 18:09:46.434983  392787 main.go:141] libmachine: (ha-365438) <domain type='kvm'>
	I0916 18:09:46.434992  392787 main.go:141] libmachine: (ha-365438)   <name>ha-365438</name>
	I0916 18:09:46.434999  392787 main.go:141] libmachine: (ha-365438)   <memory unit='MiB'>2200</memory>
	I0916 18:09:46.435006  392787 main.go:141] libmachine: (ha-365438)   <vcpu>2</vcpu>
	I0916 18:09:46.435027  392787 main.go:141] libmachine: (ha-365438)   <features>
	I0916 18:09:46.435039  392787 main.go:141] libmachine: (ha-365438)     <acpi/>
	I0916 18:09:46.435045  392787 main.go:141] libmachine: (ha-365438)     <apic/>
	I0916 18:09:46.435052  392787 main.go:141] libmachine: (ha-365438)     <pae/>
	I0916 18:09:46.435059  392787 main.go:141] libmachine: (ha-365438)     
	I0916 18:09:46.435078  392787 main.go:141] libmachine: (ha-365438)   </features>
	I0916 18:09:46.435094  392787 main.go:141] libmachine: (ha-365438)   <cpu mode='host-passthrough'>
	I0916 18:09:46.435128  392787 main.go:141] libmachine: (ha-365438)   
	I0916 18:09:46.435159  392787 main.go:141] libmachine: (ha-365438)   </cpu>
	I0916 18:09:46.435168  392787 main.go:141] libmachine: (ha-365438)   <os>
	I0916 18:09:46.435174  392787 main.go:141] libmachine: (ha-365438)     <type>hvm</type>
	I0916 18:09:46.435186  392787 main.go:141] libmachine: (ha-365438)     <boot dev='cdrom'/>
	I0916 18:09:46.435196  392787 main.go:141] libmachine: (ha-365438)     <boot dev='hd'/>
	I0916 18:09:46.435204  392787 main.go:141] libmachine: (ha-365438)     <bootmenu enable='no'/>
	I0916 18:09:46.435210  392787 main.go:141] libmachine: (ha-365438)   </os>
	I0916 18:09:46.435221  392787 main.go:141] libmachine: (ha-365438)   <devices>
	I0916 18:09:46.435231  392787 main.go:141] libmachine: (ha-365438)     <disk type='file' device='cdrom'>
	I0916 18:09:46.435262  392787 main.go:141] libmachine: (ha-365438)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/boot2docker.iso'/>
	I0916 18:09:46.435282  392787 main.go:141] libmachine: (ha-365438)       <target dev='hdc' bus='scsi'/>
	I0916 18:09:46.435293  392787 main.go:141] libmachine: (ha-365438)       <readonly/>
	I0916 18:09:46.435303  392787 main.go:141] libmachine: (ha-365438)     </disk>
	I0916 18:09:46.435313  392787 main.go:141] libmachine: (ha-365438)     <disk type='file' device='disk'>
	I0916 18:09:46.435325  392787 main.go:141] libmachine: (ha-365438)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 18:09:46.435341  392787 main.go:141] libmachine: (ha-365438)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/ha-365438.rawdisk'/>
	I0916 18:09:46.435348  392787 main.go:141] libmachine: (ha-365438)       <target dev='hda' bus='virtio'/>
	I0916 18:09:46.435355  392787 main.go:141] libmachine: (ha-365438)     </disk>
	I0916 18:09:46.435362  392787 main.go:141] libmachine: (ha-365438)     <interface type='network'>
	I0916 18:09:46.435372  392787 main.go:141] libmachine: (ha-365438)       <source network='mk-ha-365438'/>
	I0916 18:09:46.435380  392787 main.go:141] libmachine: (ha-365438)       <model type='virtio'/>
	I0916 18:09:46.435391  392787 main.go:141] libmachine: (ha-365438)     </interface>
	I0916 18:09:46.435401  392787 main.go:141] libmachine: (ha-365438)     <interface type='network'>
	I0916 18:09:46.435423  392787 main.go:141] libmachine: (ha-365438)       <source network='default'/>
	I0916 18:09:46.435444  392787 main.go:141] libmachine: (ha-365438)       <model type='virtio'/>
	I0916 18:09:46.435456  392787 main.go:141] libmachine: (ha-365438)     </interface>
	I0916 18:09:46.435463  392787 main.go:141] libmachine: (ha-365438)     <serial type='pty'>
	I0916 18:09:46.435474  392787 main.go:141] libmachine: (ha-365438)       <target port='0'/>
	I0916 18:09:46.435482  392787 main.go:141] libmachine: (ha-365438)     </serial>
	I0916 18:09:46.435493  392787 main.go:141] libmachine: (ha-365438)     <console type='pty'>
	I0916 18:09:46.435503  392787 main.go:141] libmachine: (ha-365438)       <target type='serial' port='0'/>
	I0916 18:09:46.435515  392787 main.go:141] libmachine: (ha-365438)     </console>
	I0916 18:09:46.435530  392787 main.go:141] libmachine: (ha-365438)     <rng model='virtio'>
	I0916 18:09:46.435545  392787 main.go:141] libmachine: (ha-365438)       <backend model='random'>/dev/random</backend>
	I0916 18:09:46.435555  392787 main.go:141] libmachine: (ha-365438)     </rng>
	I0916 18:09:46.435564  392787 main.go:141] libmachine: (ha-365438)     
	I0916 18:09:46.435575  392787 main.go:141] libmachine: (ha-365438)     
	I0916 18:09:46.435584  392787 main.go:141] libmachine: (ha-365438)   </devices>
	I0916 18:09:46.435601  392787 main.go:141] libmachine: (ha-365438) </domain>
	I0916 18:09:46.435610  392787 main.go:141] libmachine: (ha-365438) 
	I0916 18:09:46.439784  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:2c:d8:d4 in network default
	I0916 18:09:46.440296  392787 main.go:141] libmachine: (ha-365438) Ensuring networks are active...
	I0916 18:09:46.440318  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:46.441001  392787 main.go:141] libmachine: (ha-365438) Ensuring network default is active
	I0916 18:09:46.441405  392787 main.go:141] libmachine: (ha-365438) Ensuring network mk-ha-365438 is active
	I0916 18:09:46.442094  392787 main.go:141] libmachine: (ha-365438) Getting domain xml...
	I0916 18:09:46.442842  392787 main.go:141] libmachine: (ha-365438) Creating domain...
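
The domain XML above is then defined against libvirt and the domain is started. The kvm2 driver talks to the libvirt API directly; the sketch below performs the equivalent define-then-start steps by shelling out to virsh, with a placeholder path standing in for the rendered XML file:

    // Sketch: define and start a libvirt domain from an XML file via virsh.
    // This mirrors the "define libvirt domain" and "Creating domain" steps
    // above, but is not how the driver itself does it.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
        }
        return nil
    }

    func main() {
        xmlPath := "/tmp/ha-365438.xml" // placeholder: the rendered <domain> XML
        if err := run("virsh", "--connect", "qemu:///system", "define", xmlPath); err != nil {
            panic(err)
        }
        if err := run("virsh", "--connect", "qemu:///system", "start", "ha-365438"); err != nil {
            panic(err)
        }
    }
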
	I0916 18:09:47.648947  392787 main.go:141] libmachine: (ha-365438) Waiting to get IP...
	I0916 18:09:47.649856  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:47.650278  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:47.650334  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:47.650266  392810 retry.go:31] will retry after 283.520836ms: waiting for machine to come up
	I0916 18:09:47.935866  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:47.936176  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:47.936236  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:47.936159  392810 retry.go:31] will retry after 297.837185ms: waiting for machine to come up
	I0916 18:09:48.235774  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:48.236190  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:48.236212  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:48.236162  392810 retry.go:31] will retry after 462.816213ms: waiting for machine to come up
	I0916 18:09:48.700878  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:48.701324  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:48.701351  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:48.701273  392810 retry.go:31] will retry after 370.07957ms: waiting for machine to come up
	I0916 18:09:49.072759  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:49.073273  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:49.073320  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:49.073248  392810 retry.go:31] will retry after 688.41688ms: waiting for machine to come up
	I0916 18:09:49.763134  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:49.763556  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:49.763584  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:49.763508  392810 retry.go:31] will retry after 795.125241ms: waiting for machine to come up
	I0916 18:09:50.560100  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:50.560622  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:50.560665  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:50.560550  392810 retry.go:31] will retry after 715.844297ms: waiting for machine to come up
	I0916 18:09:51.278294  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:51.278728  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:51.278756  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:51.278686  392810 retry.go:31] will retry after 1.137561072s: waiting for machine to come up
	I0916 18:09:52.417546  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:52.417920  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:52.417944  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:52.417885  392810 retry.go:31] will retry after 1.728480138s: waiting for machine to come up
	I0916 18:09:54.148897  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:54.149250  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:54.149280  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:54.149227  392810 retry.go:31] will retry after 1.540936278s: waiting for machine to come up
	I0916 18:09:55.691955  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:55.692373  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:55.692398  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:55.692323  392810 retry.go:31] will retry after 2.060258167s: waiting for machine to come up
	I0916 18:09:57.754937  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:09:57.755410  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:09:57.755438  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:09:57.755358  392810 retry.go:31] will retry after 2.807471229s: waiting for machine to come up
	I0916 18:10:00.566328  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:00.566758  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:10:00.566785  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:10:00.566704  392810 retry.go:31] will retry after 2.874102784s: waiting for machine to come up
	I0916 18:10:03.444413  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:03.444863  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find current IP address of domain ha-365438 in network mk-ha-365438
	I0916 18:10:03.444895  392787 main.go:141] libmachine: (ha-365438) DBG | I0916 18:10:03.444763  392810 retry.go:31] will retry after 5.017111787s: waiting for machine to come up
	I0916 18:10:08.465292  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.465900  392787 main.go:141] libmachine: (ha-365438) Found IP for machine: 192.168.39.165
	I0916 18:10:08.465929  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has current primary IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.465935  392787 main.go:141] libmachine: (ha-365438) Reserving static IP address...
	I0916 18:10:08.466341  392787 main.go:141] libmachine: (ha-365438) DBG | unable to find host DHCP lease matching {name: "ha-365438", mac: "52:54:00:aa:6c:bf", ip: "192.168.39.165"} in network mk-ha-365438
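
The retry lines above show the driver polling for a DHCP lease on the domain's MAC address, sleeping a little longer (with jitter) after each miss until an IP appears. A minimal sketch of that wait loop, where lookupIP is a hypothetical stand-in for the driver's lease lookup:

    // Sketch of the "waiting for machine to come up" retry pattern.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is hypothetical; imagine it scans the host's DHCP leases for the
    // given MAC and returns the assigned IP once one exists.
    func lookupIP(mac string) (string, error) { return "", errNoLease }

    func waitForIP(mac string, deadline time.Duration) (string, error) {
        start := time.Now()
        wait := 300 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Jittered, roughly growing delay, similar in spirit to the
            // 283ms, 297ms, 462ms, ... sequence in the log above.
            sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            wait = wait * 3 / 2
        }
        return "", fmt.Errorf("machine did not get an IP within %v", deadline)
    }

    func main() {
        if _, err := waitForIP("52:54:00:aa:6c:bf", 5*time.Second); err != nil {
            fmt.Println(err)
        }
    }
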
	I0916 18:10:08.541019  392787 main.go:141] libmachine: (ha-365438) DBG | Getting to WaitForSSH function...
	I0916 18:10:08.541056  392787 main.go:141] libmachine: (ha-365438) Reserved static IP address: 192.168.39.165
	I0916 18:10:08.541070  392787 main.go:141] libmachine: (ha-365438) Waiting for SSH to be available...
	I0916 18:10:08.543538  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.543895  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:minikube Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.543923  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.544095  392787 main.go:141] libmachine: (ha-365438) DBG | Using SSH client type: external
	I0916 18:10:08.544122  392787 main.go:141] libmachine: (ha-365438) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa (-rw-------)
	I0916 18:10:08.544168  392787 main.go:141] libmachine: (ha-365438) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 18:10:08.544186  392787 main.go:141] libmachine: (ha-365438) DBG | About to run SSH command:
	I0916 18:10:08.544200  392787 main.go:141] libmachine: (ha-365438) DBG | exit 0
	I0916 18:10:08.669263  392787 main.go:141] libmachine: (ha-365438) DBG | SSH cmd err, output: <nil>: 
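
SSH readiness is probed by running "exit 0" through an external ssh client with the options listed above. A sketch that assembles the same option set and runs the probe once (the key path is a placeholder):

    // Sketch of the external-SSH liveness probe.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshArgs(ip, keyPath string) []string {
        return []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "ControlMaster=no",
            "-o", "ControlPath=none",
            "-o", "LogLevel=quiet",
            "-o", "PasswordAuthentication=no",
            "-o", "ServerAliveInterval=60",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
    }

    func main() {
        args := sshArgs("192.168.39.165", "/path/to/id_rsa") // key path is a placeholder
        if err := exec.Command("ssh", args...).Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }
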
	I0916 18:10:08.669543  392787 main.go:141] libmachine: (ha-365438) KVM machine creation complete!
	I0916 18:10:08.669922  392787 main.go:141] libmachine: (ha-365438) Calling .GetConfigRaw
	I0916 18:10:08.670493  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:08.670676  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:08.670858  392787 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 18:10:08.670873  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:08.672073  392787 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 18:10:08.672084  392787 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 18:10:08.672089  392787 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 18:10:08.672094  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:08.674253  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.674595  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.674621  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.674775  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:08.674931  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.675052  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.675159  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:08.675291  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:08.675499  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:08.675513  392787 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 18:10:08.784730  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:08.784756  392787 main.go:141] libmachine: Detecting the provisioner...
	I0916 18:10:08.784765  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:08.787646  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.787961  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.787988  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.788205  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:08.788435  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.788617  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.788756  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:08.788961  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:08.789182  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:08.789200  392787 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 18:10:08.897712  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 18:10:08.897775  392787 main.go:141] libmachine: found compatible host: buildroot
	I0916 18:10:08.897782  392787 main.go:141] libmachine: Provisioning with buildroot...
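
The provisioner is chosen by fetching /etc/os-release from the guest and matching on its ID/NAME fields (Buildroot here). A small sketch of that parsing step, reading the local file purely for illustration:

    // Sketch: parse /etc/os-release into KEY=VALUE pairs and branch on ID.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func parseOSRelease(path string) (map[string]string, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()
        info := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            info[k] = strings.Trim(v, `"`)
        }
        return info, sc.Err()
    }

    func main() {
        info, err := parseOSRelease("/etc/os-release")
        if err != nil {
            panic(err)
        }
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        } else {
            fmt.Printf("host is %s, a different provisioner would be used\n", info["NAME"])
        }
    }
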
	I0916 18:10:08.897789  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:10:08.898041  392787 buildroot.go:166] provisioning hostname "ha-365438"
	I0916 18:10:08.898070  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:10:08.898265  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:08.900576  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.901066  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:08.901098  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:08.901253  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:08.901446  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.901645  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:08.901751  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:08.901927  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:08.902111  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:08.902122  392787 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438 && echo "ha-365438" | sudo tee /etc/hostname
	I0916 18:10:09.024770  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438
	
	I0916 18:10:09.024806  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.027664  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.027985  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.028009  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.028250  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.028462  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.028647  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.028784  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.029008  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:09.029184  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:09.029199  392787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:10:09.148460  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:09.148498  392787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:10:09.148553  392787 buildroot.go:174] setting up certificates
	I0916 18:10:09.148565  392787 provision.go:84] configureAuth start
	I0916 18:10:09.148578  392787 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:10:09.148870  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:09.151619  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.151998  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.152025  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.152184  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.154538  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.154865  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.154889  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.155061  392787 provision.go:143] copyHostCerts
	I0916 18:10:09.155093  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:09.155127  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:10:09.155138  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:09.155205  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:10:09.155296  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:09.155313  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:10:09.155320  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:09.155343  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:10:09.155400  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:09.155417  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:10:09.155426  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:09.155446  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:10:09.155511  392787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438 san=[127.0.0.1 192.168.39.165 ha-365438 localhost minikube]
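
The server certificate is generated with the SAN list shown in the log entry above. The sketch below produces a certificate carrying those DNS names and IPs; for brevity it self-signs, whereas the real provisioning step signs with the ca.pem/ca-key.pem pair:

    // Sketch: issue a server certificate with the SANs from the log above.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-365438"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-365438", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.165")},
        }
        // Self-signed for the sketch: template doubles as parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
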
	I0916 18:10:09.255332  392787 provision.go:177] copyRemoteCerts
	I0916 18:10:09.255403  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:10:09.255437  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.258231  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.258551  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.258577  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.258711  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.258908  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.259042  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.259151  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:09.344339  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:10:09.344416  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 18:10:09.369182  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:10:09.369258  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:10:09.394472  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:10:09.394552  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 18:10:09.419548  392787 provision.go:87] duration metric: took 270.959045ms to configureAuth
	I0916 18:10:09.419586  392787 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:10:09.419837  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:09.419933  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.422595  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.422966  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.422993  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.423176  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.423397  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.423637  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.423798  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.423944  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:09.424166  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:09.424182  392787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:10:09.649181  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
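
The command above drops an environment file with the extra CRI-O flags and restarts the service in the same shell invocation. A sketch of how that one-liner can be assembled (in the real flow the resulting string is executed over SSH):

    // Sketch: compose the sysconfig-write-and-restart command for CRI-O.
    package main

    import "fmt"

    func crioSysconfigCmd(opts string) string {
        return fmt.Sprintf(
            `sudo mkdir -p /etc/sysconfig && printf %%s "
    CRIO_MINIKUBE_OPTIONS='%s'
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
    }

    func main() {
        fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
    }
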
	
	I0916 18:10:09.649215  392787 main.go:141] libmachine: Checking connection to Docker...
	I0916 18:10:09.649240  392787 main.go:141] libmachine: (ha-365438) Calling .GetURL
	I0916 18:10:09.650612  392787 main.go:141] libmachine: (ha-365438) DBG | Using libvirt version 6000000
	I0916 18:10:09.652753  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.653207  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.653278  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.653396  392787 main.go:141] libmachine: Docker is up and running!
	I0916 18:10:09.653409  392787 main.go:141] libmachine: Reticulating splines...
	I0916 18:10:09.653416  392787 client.go:171] duration metric: took 23.698357841s to LocalClient.Create
	I0916 18:10:09.653440  392787 start.go:167] duration metric: took 23.698426057s to libmachine.API.Create "ha-365438"
	I0916 18:10:09.653449  392787 start.go:293] postStartSetup for "ha-365438" (driver="kvm2")
	I0916 18:10:09.653459  392787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:10:09.653477  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.653791  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:10:09.653826  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.656119  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.656574  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.656599  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.656723  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.656904  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.657095  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.657220  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:09.744116  392787 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:10:09.748447  392787 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:10:09.748477  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:10:09.748543  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:10:09.748666  392787 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:10:09.748684  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:10:09.748800  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:10:09.758575  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:09.783524  392787 start.go:296] duration metric: took 130.056288ms for postStartSetup
	I0916 18:10:09.783612  392787 main.go:141] libmachine: (ha-365438) Calling .GetConfigRaw
	I0916 18:10:09.784359  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:09.786896  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.787272  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.787302  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.787596  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:09.787817  392787 start.go:128] duration metric: took 23.851663044s to createHost
	I0916 18:10:09.787843  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.790222  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.790469  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.790492  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.790649  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.790844  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.791032  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.791191  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.791344  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:09.791541  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:10:09.791559  392787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:10:09.902572  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510209.877644731
	
	I0916 18:10:09.902626  392787 fix.go:216] guest clock: 1726510209.877644731
	I0916 18:10:09.902638  392787 fix.go:229] Guest: 2024-09-16 18:10:09.877644731 +0000 UTC Remote: 2024-09-16 18:10:09.787831605 +0000 UTC m=+23.962305313 (delta=89.813126ms)
	I0916 18:10:09.902671  392787 fix.go:200] guest clock delta is within tolerance: 89.813126ms
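
The clock check runs date +%s.%N on the guest, parses the result, and compares it with the host clock against a tolerance. A sketch of that comparison, with runOnGuest as a hypothetical stand-in for the SSH runner:

    // Sketch of the guest-clock delta check.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // runOnGuest is hypothetical; assume it returns the stdout of `date +%s.%N`
    // executed on the VM, e.g. "1726510209.877644731".
    func runOnGuest(cmd string) (string, error) { return "1726510209.877644731", nil }

    func guestClockDelta(tolerance time.Duration) error {
        out, err := runOnGuest("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        if delta > tolerance {
            return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
        }
        fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        return nil
    }

    func main() {
        if err := guestClockDelta(2 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
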
	I0916 18:10:09.902683  392787 start.go:83] releasing machines lock for "ha-365438", held for 23.966604338s
	I0916 18:10:09.902714  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.902983  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:09.905268  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.905547  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.905589  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.905696  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.906225  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.906452  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:09.906551  392787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:10:09.906603  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.906638  392787 ssh_runner.go:195] Run: cat /version.json
	I0916 18:10:09.906665  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:09.909274  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909303  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909658  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.909702  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:09.909727  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909808  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:09.909859  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.910046  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.910048  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:09.910237  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:09.910248  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.910457  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:09.910445  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:09.910571  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:10.020506  392787 ssh_runner.go:195] Run: systemctl --version
	I0916 18:10:10.026746  392787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:10:10.186605  392787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:10:10.192998  392787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:10:10.193074  392787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:10:10.210382  392787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 18:10:10.210412  392787 start.go:495] detecting cgroup driver to use...
	I0916 18:10:10.210482  392787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:10:10.227369  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:10:10.242414  392787 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:10:10.242485  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:10:10.257131  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:10:10.271966  392787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:10:10.391099  392787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:10:10.572487  392787 docker.go:233] disabling docker service ...
	I0916 18:10:10.572566  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:10:10.588966  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:10:10.601981  392787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:10:10.740636  392787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:10:10.878326  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:10:10.892590  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:10:10.911709  392787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:10:10.911775  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.922389  392787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:10:10.922465  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.933274  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.944462  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.955915  392787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:10:10.967551  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.979310  392787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:10.998237  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
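
The sed commands above point CRI-O at the pause:3.10 image, switch it to the cgroupfs cgroup manager, and adjust the default sysctls. A sketch of the first two rewrites applied to a local copy of 02-crio.conf with regexp instead of remote sed (offline illustration only):

    // Sketch: rewrite pause_image and cgroup_manager in a crio drop-in config.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := rewriteCrioConf("02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
            fmt.Println(err)
        }
    }
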
	I0916 18:10:11.009805  392787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:10:11.019885  392787 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 18:10:11.019951  392787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 18:10:11.033562  392787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:10:11.044563  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:10:11.172744  392787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:10:11.271253  392787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:10:11.271339  392787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:10:11.276484  392787 start.go:563] Will wait 60s for crictl version
	I0916 18:10:11.276555  392787 ssh_runner.go:195] Run: which crictl
	I0916 18:10:11.280518  392787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:10:11.321488  392787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:10:11.321594  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:11.350882  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:11.381527  392787 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:10:11.382847  392787 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:10:11.385449  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:11.385812  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:11.385839  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:11.386079  392787 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:10:11.390612  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:10:11.406408  392787 kubeadm.go:883] updating cluster {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 18:10:11.406535  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:10:11.406590  392787 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:10:11.447200  392787 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 18:10:11.447268  392787 ssh_runner.go:195] Run: which lz4
	I0916 18:10:11.451561  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0916 18:10:11.451682  392787 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 18:10:11.456239  392787 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 18:10:11.456268  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 18:10:12.876494  392787 crio.go:462] duration metric: took 1.4248413s to copy over tarball
	I0916 18:10:12.876584  392787 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 18:10:14.935900  392787 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.059286389s)
	I0916 18:10:14.935940  392787 crio.go:469] duration metric: took 2.059412063s to extract the tarball
	I0916 18:10:14.935951  392787 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 18:10:14.973313  392787 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:10:15.019757  392787 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:10:15.019785  392787 cache_images.go:84] Images are preloaded, skipping loading
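For reference, here is a minimal Go sketch (not minikube's actual crio.go) of the preload decision visible above: run `sudo crictl images --output json`, parse the result, and look for a marker image such as registry.k8s.io/kube-apiserver:v1.31.1. The JSON field names follow crictl's documented output; the marker tag is taken from the log.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the subset of `crictl images --output json` we need.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already has the given image tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	fmt.Println("preloaded:", ok) // false would trigger the tarball copy/extract seen above
}
```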
	I0916 18:10:15.019793  392787 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.31.1 crio true true} ...
	I0916 18:10:15.019895  392787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 18:10:15.019965  392787 ssh_runner.go:195] Run: crio config
	I0916 18:10:15.074859  392787 cni.go:84] Creating CNI manager for ""
	I0916 18:10:15.074884  392787 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 18:10:15.074896  392787 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 18:10:15.074922  392787 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-365438 NodeName:ha-365438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 18:10:15.075071  392787 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-365438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 18:10:15.075097  392787 kube-vip.go:115] generating kube-vip config ...
	I0916 18:10:15.075140  392787 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:10:15.093642  392787 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:10:15.093768  392787 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
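A minimal sketch, with illustrative field names, of how config blobs like the kubeadm YAML and the kube-vip static pod above can be rendered from a parameter struct with Go's text/template before being copied to the node. This is not minikube's actual template or parameter set; it only shows the general technique.

```go
package main

import (
	"os"
	"text/template"
)

// vipParams holds the values substituted into the manifest; names are illustrative.
type vipParams struct {
	VIP       string
	Port      string
	Interface string
	Image     string
}

const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	// Values taken from the rendered manifest in the log above.
	_ = t.Execute(os.Stdout, vipParams{
		VIP:       "192.168.39.254",
		Port:      "8443",
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
	})
}
```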
	I0916 18:10:15.093826  392787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:10:15.104325  392787 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 18:10:15.104413  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 18:10:15.115282  392787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0916 18:10:15.133359  392787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:10:15.151228  392787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0916 18:10:15.169219  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0916 18:10:15.187557  392787 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:10:15.192161  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
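The grep/echo/cp one-liner above pins control-plane.minikube.internal to the HA VIP in /etc/hosts idempotently (drop any stale entry, append the fresh one). A rough Go equivalent of that edit, written to a scratch path rather than /etc/hosts, might look like this; the helper is hypothetical, not minikube code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost removes any existing line ending in "\t<host>" and appends "<ip>\t<host>".
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+host) {
			continue // drop any stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing to a scratch file here; the real target would be /etc/hosts (as root).
	if err := pinHost("hosts.test", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```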
	I0916 18:10:15.206388  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:10:15.342949  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:10:15.359967  392787 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.165
	I0916 18:10:15.359996  392787 certs.go:194] generating shared ca certs ...
	I0916 18:10:15.360015  392787 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.360194  392787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:10:15.360258  392787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:10:15.360273  392787 certs.go:256] generating profile certs ...
	I0916 18:10:15.360337  392787 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:10:15.360373  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt with IP's: []
	I0916 18:10:15.551306  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt ...
	I0916 18:10:15.551342  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt: {Name:mkc3db8b1101003a3b29c04d7b8c9aeb779fd32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.551543  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key ...
	I0916 18:10:15.551560  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key: {Name:mk23aeda90888d0044ea468a8c24dd15a14c193f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.551673  392787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa
	I0916 18:10:15.551692  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.254]
	I0916 18:10:15.656888  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa ...
	I0916 18:10:15.656947  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa: {Name:mke35516cd8bcea2b1e4bff6c9e1c4b746bd51cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.657136  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa ...
	I0916 18:10:15.657154  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa: {Name:mk67396fd6a5e04a27321be953e22e674a4f06bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.657257  392787 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.96db92fa -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:10:15.657356  392787 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.96db92fa -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:10:15.657460  392787 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:10:15.657481  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt with IP's: []
	I0916 18:10:15.940352  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt ...
	I0916 18:10:15.940389  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt: {Name:mke3aeb0e02e8ca7bf96d4b2cba27ef685c7b48a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:15.940580  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key ...
	I0916 18:10:15.940595  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key: {Name:mkb18a35b9920b50dca88235e28388a5820fbec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
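The profile certs generated above are ordinary x509 certificates signed by the shared minikubeCA. The following self-contained sketch uses only the Go standard library to show the shape of that signing step, with the SAN IPs taken from the apiserver cert line in the log; it is not minikube's crypto.go, and the CA here is a throwaway stand-in.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving cert with the SAN IPs reported for the apiserver profile cert above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.165"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
```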
	I0916 18:10:15.940690  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:10:15.940713  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:10:15.940729  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:10:15.940750  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:10:15.940767  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:10:15.940790  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:10:15.940808  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:10:15.940838  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:10:15.940906  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:10:15.940975  392787 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:10:15.940990  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:10:15.941028  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:10:15.941072  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:10:15.941108  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:10:15.941164  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:15.941204  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:15.941225  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:10:15.941243  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:10:15.941910  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:10:15.968962  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:10:15.994450  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:10:16.020858  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:10:16.048300  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 18:10:16.074581  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 18:10:16.100221  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:10:16.125842  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:10:16.154371  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:10:16.179696  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:10:16.214450  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:10:16.240446  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 18:10:16.259949  392787 ssh_runner.go:195] Run: openssl version
	I0916 18:10:16.266188  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:10:16.278092  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:16.283093  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:16.283168  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:16.289592  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:10:16.301039  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:10:16.312436  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:10:16.317338  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:10:16.317443  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:10:16.323451  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:10:16.334583  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:10:16.346904  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:10:16.351957  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:10:16.352006  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:10:16.358300  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 18:10:16.370577  392787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:10:16.375213  392787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 18:10:16.375275  392787 kubeadm.go:392] StartCluster: {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:10:16.375380  392787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 18:10:16.375457  392787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 18:10:16.418815  392787 cri.go:89] found id: ""
	I0916 18:10:16.418883  392787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 18:10:16.429042  392787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 18:10:16.439116  392787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 18:10:16.448909  392787 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 18:10:16.448955  392787 kubeadm.go:157] found existing configuration files:
	
	I0916 18:10:16.449017  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 18:10:16.457939  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 18:10:16.457999  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 18:10:16.469172  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 18:10:16.478337  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 18:10:16.478410  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 18:10:16.489316  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 18:10:16.499123  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 18:10:16.499183  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 18:10:16.509331  392787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 18:10:16.519711  392787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 18:10:16.519778  392787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 18:10:16.529881  392787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 18:10:16.641425  392787 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 18:10:16.641531  392787 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 18:10:16.740380  392787 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 18:10:16.740525  392787 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 18:10:16.740686  392787 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 18:10:16.760499  392787 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 18:10:16.920961  392787 out.go:235]   - Generating certificates and keys ...
	I0916 18:10:16.921100  392787 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 18:10:16.921171  392787 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 18:10:16.998342  392787 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 18:10:17.125003  392787 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 18:10:17.361090  392787 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 18:10:17.742955  392787 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 18:10:17.849209  392787 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 18:10:17.849413  392787 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-365438 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0916 18:10:17.928825  392787 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 18:10:17.929089  392787 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-365438 localhost] and IPs [192.168.39.165 127.0.0.1 ::1]
	I0916 18:10:18.075649  392787 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 18:10:18.204742  392787 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 18:10:18.245512  392787 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 18:10:18.245734  392787 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 18:10:18.659010  392787 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 18:10:18.872130  392787 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 18:10:18.929814  392787 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 18:10:19.311882  392787 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 18:10:19.409886  392787 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 18:10:19.410721  392787 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 18:10:19.414179  392787 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 18:10:19.514491  392787 out.go:235]   - Booting up control plane ...
	I0916 18:10:19.514679  392787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 18:10:19.514817  392787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 18:10:19.514921  392787 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 18:10:19.515110  392787 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 18:10:19.515272  392787 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 18:10:19.515350  392787 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 18:10:19.589659  392787 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 18:10:19.589838  392787 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 18:10:20.589849  392787 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001469256s
	I0916 18:10:20.589940  392787 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 18:10:26.435717  392787 kubeadm.go:310] [api-check] The API server is healthy after 5.848884759s
	I0916 18:10:26.453718  392787 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 18:10:26.466069  392787 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 18:10:26.493083  392787 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 18:10:26.493369  392787 kubeadm.go:310] [mark-control-plane] Marking the node ha-365438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 18:10:26.506186  392787 kubeadm.go:310] [bootstrap-token] Using token: tw4zgl.f8vkt3x516r20x53
	I0916 18:10:26.507638  392787 out.go:235]   - Configuring RBAC rules ...
	I0916 18:10:26.507809  392787 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 18:10:26.517833  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 18:10:26.531746  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 18:10:26.537095  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 18:10:26.541072  392787 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 18:10:26.548789  392787 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 18:10:26.844266  392787 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 18:10:27.277806  392787 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 18:10:27.842459  392787 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 18:10:27.844557  392787 kubeadm.go:310] 
	I0916 18:10:27.844677  392787 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 18:10:27.844691  392787 kubeadm.go:310] 
	I0916 18:10:27.844823  392787 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 18:10:27.844840  392787 kubeadm.go:310] 
	I0916 18:10:27.844874  392787 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 18:10:27.844971  392787 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 18:10:27.845041  392787 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 18:10:27.845051  392787 kubeadm.go:310] 
	I0916 18:10:27.845124  392787 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 18:10:27.845133  392787 kubeadm.go:310] 
	I0916 18:10:27.845194  392787 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 18:10:27.845204  392787 kubeadm.go:310] 
	I0916 18:10:27.845299  392787 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 18:10:27.845432  392787 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 18:10:27.845516  392787 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 18:10:27.845523  392787 kubeadm.go:310] 
	I0916 18:10:27.845646  392787 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 18:10:27.845731  392787 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 18:10:27.845738  392787 kubeadm.go:310] 
	I0916 18:10:27.845815  392787 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tw4zgl.f8vkt3x516r20x53 \
	I0916 18:10:27.845905  392787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e \
	I0916 18:10:27.845926  392787 kubeadm.go:310] 	--control-plane 
	I0916 18:10:27.845929  392787 kubeadm.go:310] 
	I0916 18:10:27.845998  392787 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 18:10:27.846003  392787 kubeadm.go:310] 
	I0916 18:10:27.846070  392787 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tw4zgl.f8vkt3x516r20x53 \
	I0916 18:10:27.846176  392787 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e 
	I0916 18:10:27.848408  392787 kubeadm.go:310] W0916 18:10:16.621020     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 18:10:27.848816  392787 kubeadm.go:310] W0916 18:10:16.621804     835 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 18:10:27.848993  392787 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
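The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A small standard-library sketch that recomputes it from ca.crt; the path is an assumption based on the certificateDir used in this run.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path is an assumption
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is sha256 over the raw SubjectPublicKeyInfo of the CA.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}
```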
	I0916 18:10:27.849055  392787 cni.go:84] Creating CNI manager for ""
	I0916 18:10:27.849070  392787 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 18:10:27.851663  392787 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 18:10:27.853746  392787 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 18:10:27.860026  392787 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 18:10:27.860053  392787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 18:10:27.881459  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 18:10:28.293048  392787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 18:10:28.293098  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:28.293102  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-365438 minikube.k8s.io/updated_at=2024_09_16T18_10_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=ha-365438 minikube.k8s.io/primary=true
	I0916 18:10:28.494326  392787 ops.go:34] apiserver oom_adj: -16
	I0916 18:10:28.494487  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:28.995554  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:29.494840  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:29.994554  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:30.494592  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:30.994910  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:31.495575  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:31.994604  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 18:10:32.184635  392787 kubeadm.go:1113] duration metric: took 3.891601216s to wait for elevateKubeSystemPrivileges
	I0916 18:10:32.184688  392787 kubeadm.go:394] duration metric: took 15.809420067s to StartCluster
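The repeated `kubectl get sa default` runs above are a poll loop: the cluster-admin binding for kube-system is only useful once the default ServiceAccount exists. A plain-Go rendering of that wait, with the kubeconfig path taken from the log and the interval/timeout chosen arbitrarily for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // arbitrary timeout for the sketch
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default", "-n", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
```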
	I0916 18:10:32.184718  392787 settings.go:142] acquiring lock: {Name:mk9af1b5fb868180f97a2648a387fb06c7d5fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:32.184834  392787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:10:32.185867  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/kubeconfig: {Name:mk8f19e4e61aad6cdecf3a2028815277e5ffb248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:32.186174  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 18:10:32.186174  392787 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:10:32.186202  392787 start.go:241] waiting for startup goroutines ...
	I0916 18:10:32.186221  392787 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 18:10:32.186312  392787 addons.go:69] Setting storage-provisioner=true in profile "ha-365438"
	I0916 18:10:32.186334  392787 addons.go:234] Setting addon storage-provisioner=true in "ha-365438"
	I0916 18:10:32.186390  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:10:32.186333  392787 addons.go:69] Setting default-storageclass=true in profile "ha-365438"
	I0916 18:10:32.186447  392787 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-365438"
	I0916 18:10:32.186489  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:32.186867  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.186891  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.186924  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.187014  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.203255  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0916 18:10:32.203442  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44615
	I0916 18:10:32.203855  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.203908  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.204477  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.204514  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.204633  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.204660  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.204976  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.205035  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.205232  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:32.205522  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.205573  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.207426  392787 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:10:32.207825  392787 kapi.go:59] client config for ha-365438: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt", KeyFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key", CAFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 18:10:32.208405  392787 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 18:10:32.208723  392787 addons.go:234] Setting addon default-storageclass=true in "ha-365438"
	I0916 18:10:32.208776  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:10:32.209194  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.209243  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.222111  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0916 18:10:32.222679  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.223233  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.223266  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.223690  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.223910  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:32.225573  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0916 18:10:32.225894  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:32.226036  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.226457  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.226474  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.226776  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.227398  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:32.227445  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:32.228101  392787 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 18:10:32.229529  392787 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 18:10:32.229551  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 18:10:32.229573  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:32.232705  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.232894  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:32.232960  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.233073  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:32.233317  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:32.233491  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:32.233658  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:32.243577  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39127
	I0916 18:10:32.244041  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:32.244603  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:32.244643  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:32.245037  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:32.245260  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:32.247169  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:32.247400  392787 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 18:10:32.247421  392787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 18:10:32.247445  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:32.250674  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.251107  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:32.251138  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:32.251306  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:32.251487  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:32.251614  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:32.251722  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:32.417097  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 18:10:32.437995  392787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 18:10:32.439876  392787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 18:10:33.053402  392787 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
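The sed pipeline above rewrites the CoreDNS Corefile so host.minikube.internal resolves to the host IP inside the cluster. The same edit expressed as a string transform, for illustration only; the sample Corefile is abbreviated and the helper is hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza immediately before the forward plugin.
func injectHostRecord(corefile, hostIP string) string {
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, stanza)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
```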
	I0916 18:10:33.053505  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.053531  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.053838  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.053851  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.053860  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.053866  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.054145  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.054163  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.054180  392787 main.go:141] libmachine: (ha-365438) DBG | Closing plugin on server side
	I0916 18:10:33.054230  392787 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 18:10:33.054249  392787 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 18:10:33.054345  392787 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 18:10:33.054354  392787 round_trippers.go:469] Request Headers:
	I0916 18:10:33.054364  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:10:33.054372  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:10:33.063977  392787 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 18:10:33.064590  392787 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 18:10:33.064605  392787 round_trippers.go:469] Request Headers:
	I0916 18:10:33.064612  392787 round_trippers.go:473]     Content-Type: application/json
	I0916 18:10:33.064625  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:10:33.064628  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:10:33.067585  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:10:33.067787  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.067804  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.068116  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.068138  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.068154  392787 main.go:141] libmachine: (ha-365438) DBG | Closing plugin on server side
	I0916 18:10:33.314618  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.314651  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.315003  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.315062  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.315081  392787 main.go:141] libmachine: Making call to close driver server
	I0916 18:10:33.315088  392787 main.go:141] libmachine: (ha-365438) Calling .Close
	I0916 18:10:33.315110  392787 main.go:141] libmachine: (ha-365438) DBG | Closing plugin on server side
	I0916 18:10:33.315385  392787 main.go:141] libmachine: Successfully made call to close driver server
	I0916 18:10:33.315403  392787 main.go:141] libmachine: Making call to close connection to plugin binary
	I0916 18:10:33.317232  392787 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 18:10:33.318799  392787 addons.go:510] duration metric: took 1.132579143s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 18:10:33.318848  392787 start.go:246] waiting for cluster config update ...
	I0916 18:10:33.318865  392787 start.go:255] writing updated cluster config ...
	I0916 18:10:33.320826  392787 out.go:201] 
	I0916 18:10:33.322359  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:33.322461  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:33.324743  392787 out.go:177] * Starting "ha-365438-m02" control-plane node in "ha-365438" cluster
	I0916 18:10:33.326567  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:10:33.326599  392787 cache.go:56] Caching tarball of preloaded images
	I0916 18:10:33.326724  392787 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:10:33.326741  392787 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:10:33.326828  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:33.327285  392787 start.go:360] acquireMachinesLock for ha-365438-m02: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:10:33.327372  392787 start.go:364] duration metric: took 64.213µs to acquireMachinesLock for "ha-365438-m02"
	I0916 18:10:33.327391  392787 start.go:93] Provisioning new machine with config: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:10:33.327457  392787 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0916 18:10:33.329287  392787 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 18:10:33.329421  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:33.329482  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:33.344726  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38413
	I0916 18:10:33.345292  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:33.345856  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:33.345885  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:33.346250  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:33.346458  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:33.346654  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:33.346828  392787 start.go:159] libmachine.API.Create for "ha-365438" (driver="kvm2")
	I0916 18:10:33.346884  392787 client.go:168] LocalClient.Create starting
	I0916 18:10:33.346999  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 18:10:33.347057  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:10:33.347081  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:10:33.347151  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 18:10:33.347178  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:10:33.347194  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:10:33.347217  392787 main.go:141] libmachine: Running pre-create checks...
	I0916 18:10:33.347228  392787 main.go:141] libmachine: (ha-365438-m02) Calling .PreCreateCheck
	I0916 18:10:33.347425  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetConfigRaw
	I0916 18:10:33.347823  392787 main.go:141] libmachine: Creating machine...
	I0916 18:10:33.347840  392787 main.go:141] libmachine: (ha-365438-m02) Calling .Create
	I0916 18:10:33.348010  392787 main.go:141] libmachine: (ha-365438-m02) Creating KVM machine...
	I0916 18:10:33.349416  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found existing default KVM network
	I0916 18:10:33.349576  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found existing private KVM network mk-ha-365438
	I0916 18:10:33.349710  392787 main.go:141] libmachine: (ha-365438-m02) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02 ...
	I0916 18:10:33.349734  392787 main.go:141] libmachine: (ha-365438-m02) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 18:10:33.349860  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.349743  393164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:10:33.349954  392787 main.go:141] libmachine: (ha-365438-m02) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 18:10:33.622442  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.622279  393164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa...
	I0916 18:10:33.683496  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.683324  393164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/ha-365438-m02.rawdisk...
	I0916 18:10:33.683530  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Writing magic tar header
	I0916 18:10:33.683544  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Writing SSH key tar header
	I0916 18:10:33.683554  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:33.683451  393164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02 ...
	I0916 18:10:33.683578  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02
	I0916 18:10:33.683589  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02 (perms=drwx------)
	I0916 18:10:33.683599  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 18:10:33.683613  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:10:33.683636  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 18:10:33.683649  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 18:10:33.683666  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 18:10:33.683677  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home/jenkins
	I0916 18:10:33.683689  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 18:10:33.683703  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 18:10:33.683715  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Checking permissions on dir: /home
	I0916 18:10:33.683729  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 18:10:33.683743  392787 main.go:141] libmachine: (ha-365438-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 18:10:33.683753  392787 main.go:141] libmachine: (ha-365438-m02) Creating domain...
	I0916 18:10:33.683764  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Skipping /home - not owner
	I0916 18:10:33.684740  392787 main.go:141] libmachine: (ha-365438-m02) define libvirt domain using xml: 
	I0916 18:10:33.684771  392787 main.go:141] libmachine: (ha-365438-m02) <domain type='kvm'>
	I0916 18:10:33.684783  392787 main.go:141] libmachine: (ha-365438-m02)   <name>ha-365438-m02</name>
	I0916 18:10:33.684795  392787 main.go:141] libmachine: (ha-365438-m02)   <memory unit='MiB'>2200</memory>
	I0916 18:10:33.684803  392787 main.go:141] libmachine: (ha-365438-m02)   <vcpu>2</vcpu>
	I0916 18:10:33.684807  392787 main.go:141] libmachine: (ha-365438-m02)   <features>
	I0916 18:10:33.684815  392787 main.go:141] libmachine: (ha-365438-m02)     <acpi/>
	I0916 18:10:33.684819  392787 main.go:141] libmachine: (ha-365438-m02)     <apic/>
	I0916 18:10:33.684824  392787 main.go:141] libmachine: (ha-365438-m02)     <pae/>
	I0916 18:10:33.684827  392787 main.go:141] libmachine: (ha-365438-m02)     
	I0916 18:10:33.684832  392787 main.go:141] libmachine: (ha-365438-m02)   </features>
	I0916 18:10:33.684837  392787 main.go:141] libmachine: (ha-365438-m02)   <cpu mode='host-passthrough'>
	I0916 18:10:33.684843  392787 main.go:141] libmachine: (ha-365438-m02)   
	I0916 18:10:33.684847  392787 main.go:141] libmachine: (ha-365438-m02)   </cpu>
	I0916 18:10:33.684854  392787 main.go:141] libmachine: (ha-365438-m02)   <os>
	I0916 18:10:33.684858  392787 main.go:141] libmachine: (ha-365438-m02)     <type>hvm</type>
	I0916 18:10:33.684896  392787 main.go:141] libmachine: (ha-365438-m02)     <boot dev='cdrom'/>
	I0916 18:10:33.684934  392787 main.go:141] libmachine: (ha-365438-m02)     <boot dev='hd'/>
	I0916 18:10:33.684950  392787 main.go:141] libmachine: (ha-365438-m02)     <bootmenu enable='no'/>
	I0916 18:10:33.684959  392787 main.go:141] libmachine: (ha-365438-m02)   </os>
	I0916 18:10:33.684968  392787 main.go:141] libmachine: (ha-365438-m02)   <devices>
	I0916 18:10:33.684979  392787 main.go:141] libmachine: (ha-365438-m02)     <disk type='file' device='cdrom'>
	I0916 18:10:33.684994  392787 main.go:141] libmachine: (ha-365438-m02)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/boot2docker.iso'/>
	I0916 18:10:33.685005  392787 main.go:141] libmachine: (ha-365438-m02)       <target dev='hdc' bus='scsi'/>
	I0916 18:10:33.685016  392787 main.go:141] libmachine: (ha-365438-m02)       <readonly/>
	I0916 18:10:33.685031  392787 main.go:141] libmachine: (ha-365438-m02)     </disk>
	I0916 18:10:33.685047  392787 main.go:141] libmachine: (ha-365438-m02)     <disk type='file' device='disk'>
	I0916 18:10:33.685066  392787 main.go:141] libmachine: (ha-365438-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 18:10:33.685082  392787 main.go:141] libmachine: (ha-365438-m02)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/ha-365438-m02.rawdisk'/>
	I0916 18:10:33.685094  392787 main.go:141] libmachine: (ha-365438-m02)       <target dev='hda' bus='virtio'/>
	I0916 18:10:33.685103  392787 main.go:141] libmachine: (ha-365438-m02)     </disk>
	I0916 18:10:33.685113  392787 main.go:141] libmachine: (ha-365438-m02)     <interface type='network'>
	I0916 18:10:33.685126  392787 main.go:141] libmachine: (ha-365438-m02)       <source network='mk-ha-365438'/>
	I0916 18:10:33.685138  392787 main.go:141] libmachine: (ha-365438-m02)       <model type='virtio'/>
	I0916 18:10:33.685145  392787 main.go:141] libmachine: (ha-365438-m02)     </interface>
	I0916 18:10:33.685156  392787 main.go:141] libmachine: (ha-365438-m02)     <interface type='network'>
	I0916 18:10:33.685166  392787 main.go:141] libmachine: (ha-365438-m02)       <source network='default'/>
	I0916 18:10:33.685177  392787 main.go:141] libmachine: (ha-365438-m02)       <model type='virtio'/>
	I0916 18:10:33.685186  392787 main.go:141] libmachine: (ha-365438-m02)     </interface>
	I0916 18:10:33.685208  392787 main.go:141] libmachine: (ha-365438-m02)     <serial type='pty'>
	I0916 18:10:33.685225  392787 main.go:141] libmachine: (ha-365438-m02)       <target port='0'/>
	I0916 18:10:33.685237  392787 main.go:141] libmachine: (ha-365438-m02)     </serial>
	I0916 18:10:33.685243  392787 main.go:141] libmachine: (ha-365438-m02)     <console type='pty'>
	I0916 18:10:33.685255  392787 main.go:141] libmachine: (ha-365438-m02)       <target type='serial' port='0'/>
	I0916 18:10:33.685262  392787 main.go:141] libmachine: (ha-365438-m02)     </console>
	I0916 18:10:33.685269  392787 main.go:141] libmachine: (ha-365438-m02)     <rng model='virtio'>
	I0916 18:10:33.685275  392787 main.go:141] libmachine: (ha-365438-m02)       <backend model='random'>/dev/random</backend>
	I0916 18:10:33.685282  392787 main.go:141] libmachine: (ha-365438-m02)     </rng>
	I0916 18:10:33.685286  392787 main.go:141] libmachine: (ha-365438-m02)     
	I0916 18:10:33.685293  392787 main.go:141] libmachine: (ha-365438-m02)     
	I0916 18:10:33.685301  392787 main.go:141] libmachine: (ha-365438-m02)   </devices>
	I0916 18:10:33.685319  392787 main.go:141] libmachine: (ha-365438-m02) </domain>
	I0916 18:10:33.685335  392787 main.go:141] libmachine: (ha-365438-m02) 
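For context on the domain definition dumped line-by-line above: the sketch below is an illustration only (not the kvm2 driver's actual code) of rendering a comparable libvirt domain XML with Go's text/template; the struct fields, template, and file paths are assumptions, and the rendered XML would normally be handed to libvirt (for example via `virsh define`).

    // Hypothetical sketch: render a libvirt domain XML similar to the one logged above.
    package main

    import (
        "os"
        "text/template"
    )

    type domainConfig struct {
        Name     string
        MemoryMB int
        VCPUs    int
        ISOPath  string
        DiskPath string
        Network  string
    }

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='cdrom'>
          <source file='{{.ISOPath}}'/>
          <target dev='hdc' bus='scsi'/>
          <readonly/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='default' io='threads'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    func main() {
        cfg := domainConfig{
            Name:     "ha-365438-m02",
            MemoryMB: 2200,
            VCPUs:    2,
            ISOPath:  "/path/to/boot2docker.iso",       // placeholder path
            DiskPath: "/path/to/ha-365438-m02.rawdisk", // placeholder path
            Network:  "mk-ha-365438",
        }
        tmpl := template.Must(template.New("domain").Parse(domainXML))
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }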
	I0916 18:10:33.692250  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:93:a9:f4 in network default
	I0916 18:10:33.692837  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:33.692879  392787 main.go:141] libmachine: (ha-365438-m02) Ensuring networks are active...
	I0916 18:10:33.693646  392787 main.go:141] libmachine: (ha-365438-m02) Ensuring network default is active
	I0916 18:10:33.693968  392787 main.go:141] libmachine: (ha-365438-m02) Ensuring network mk-ha-365438 is active
	I0916 18:10:33.694323  392787 main.go:141] libmachine: (ha-365438-m02) Getting domain xml...
	I0916 18:10:33.695108  392787 main.go:141] libmachine: (ha-365438-m02) Creating domain...
	I0916 18:10:34.930246  392787 main.go:141] libmachine: (ha-365438-m02) Waiting to get IP...
	I0916 18:10:34.930981  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:34.931456  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:34.931477  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:34.931417  393164 retry.go:31] will retry after 235.385827ms: waiting for machine to come up
	I0916 18:10:35.169108  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:35.169640  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:35.169666  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:35.169598  393164 retry.go:31] will retry after 348.78948ms: waiting for machine to come up
	I0916 18:10:35.520267  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:35.520777  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:35.520802  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:35.520722  393164 retry.go:31] will retry after 422.811372ms: waiting for machine to come up
	I0916 18:10:35.945450  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:35.945886  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:35.945909  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:35.945861  393164 retry.go:31] will retry after 520.351266ms: waiting for machine to come up
	I0916 18:10:36.467407  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:36.467900  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:36.467929  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:36.467852  393164 retry.go:31] will retry after 750.8123ms: waiting for machine to come up
	I0916 18:10:37.219915  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:37.220404  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:37.220438  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:37.220351  393164 retry.go:31] will retry after 878.610223ms: waiting for machine to come up
	I0916 18:10:38.100678  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:38.101223  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:38.101252  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:38.101151  393164 retry.go:31] will retry after 782.076333ms: waiting for machine to come up
	I0916 18:10:38.884536  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:38.884997  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:38.885027  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:38.884914  393164 retry.go:31] will retry after 1.480505092s: waiting for machine to come up
	I0916 18:10:40.366675  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:40.367305  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:40.367345  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:40.367246  393164 retry.go:31] will retry after 1.861407296s: waiting for machine to come up
	I0916 18:10:42.231317  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:42.231771  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:42.231798  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:42.231712  393164 retry.go:31] will retry after 1.504488445s: waiting for machine to come up
	I0916 18:10:43.737950  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:43.738233  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:43.738262  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:43.738200  393164 retry.go:31] will retry after 1.87598511s: waiting for machine to come up
	I0916 18:10:45.616256  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:45.616716  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:45.616744  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:45.616666  393164 retry.go:31] will retry after 2.223821755s: waiting for machine to come up
	I0916 18:10:47.843191  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:47.843584  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:47.843607  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:47.843532  393164 retry.go:31] will retry after 3.555447139s: waiting for machine to come up
	I0916 18:10:51.402441  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:51.402828  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find current IP address of domain ha-365438-m02 in network mk-ha-365438
	I0916 18:10:51.402853  392787 main.go:141] libmachine: (ha-365438-m02) DBG | I0916 18:10:51.402798  393164 retry.go:31] will retry after 3.446453336s: waiting for machine to come up
	I0916 18:10:54.850944  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.851447  392787 main.go:141] libmachine: (ha-365438-m02) Found IP for machine: 192.168.39.18
	I0916 18:10:54.851476  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has current primary IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.851485  392787 main.go:141] libmachine: (ha-365438-m02) Reserving static IP address...
	I0916 18:10:54.852073  392787 main.go:141] libmachine: (ha-365438-m02) DBG | unable to find host DHCP lease matching {name: "ha-365438-m02", mac: "52:54:00:e9:b2:f7", ip: "192.168.39.18"} in network mk-ha-365438
	I0916 18:10:54.927598  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Getting to WaitForSSH function...
	I0916 18:10:54.927633  392787 main.go:141] libmachine: (ha-365438-m02) Reserved static IP address: 192.168.39.18
	I0916 18:10:54.927647  392787 main.go:141] libmachine: (ha-365438-m02) Waiting for SSH to be available...
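The "will retry after ..." lines above follow a poll-with-growing-delay pattern: keep checking the DHCP leases for the guest's MAC until an IP appears or a deadline passes. The generic Go sketch below illustrates that pattern under stated assumptions; the helper name and the stubbed lookup are hypothetical, not the driver's implementation.

    // Illustrative poll-with-backoff loop, mirroring the retry intervals in the log.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or timeout elapses.
    // lookup stands in for "read the DHCP leases for this MAC".
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            // Grow the delay and add jitter so probes spread out over time.
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 5 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.18", nil // address taken from the log above
        }, 30*time.Second)
        fmt.Println(ip, err)
    }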
	I0916 18:10:54.930258  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.930667  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:54.930701  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:54.930942  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Using SSH client type: external
	I0916 18:10:54.930968  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa (-rw-------)
	I0916 18:10:54.931002  392787 main.go:141] libmachine: (ha-365438-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 18:10:54.931016  392787 main.go:141] libmachine: (ha-365438-m02) DBG | About to run SSH command:
	I0916 18:10:54.931049  392787 main.go:141] libmachine: (ha-365438-m02) DBG | exit 0
	I0916 18:10:55.061259  392787 main.go:141] libmachine: (ha-365438-m02) DBG | SSH cmd err, output: <nil>: 
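The external SSH probe above simply runs `exit 0` over SSH and treats a zero exit status as "SSH is available". A minimal sketch of that check, using only flags visible in the log (the helper and key path are hypothetical):

    // Run `ssh ... docker@<ip> "exit 0"` and report whether it succeeded.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func sshAlive(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run()
    }

    func main() {
        err := sshAlive("192.168.39.18", "/path/to/id_rsa") // placeholder key path
        fmt.Println("ssh reachable:", err == nil)
    }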
	I0916 18:10:55.061561  392787 main.go:141] libmachine: (ha-365438-m02) KVM machine creation complete!
	I0916 18:10:55.061813  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetConfigRaw
	I0916 18:10:55.062383  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:55.062549  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:55.062742  392787 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 18:10:55.062756  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:10:55.064191  392787 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 18:10:55.064206  392787 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 18:10:55.064211  392787 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 18:10:55.064216  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.066231  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.066507  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.066535  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.066665  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.066836  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.066989  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.067125  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.067275  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.067508  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.067519  392787 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 18:10:55.180358  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:55.180406  392787 main.go:141] libmachine: Detecting the provisioner...
	I0916 18:10:55.180418  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.183181  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.183571  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.183599  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.183721  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.183916  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.184098  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.184207  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.184357  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.184579  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.184592  392787 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 18:10:55.298227  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 18:10:55.298321  392787 main.go:141] libmachine: found compatible host: buildroot
	I0916 18:10:55.298335  392787 main.go:141] libmachine: Provisioning with buildroot...
	I0916 18:10:55.298349  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:55.298609  392787 buildroot.go:166] provisioning hostname "ha-365438-m02"
	I0916 18:10:55.298629  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:55.298847  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.301662  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.302063  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.302091  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.302204  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.302398  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.302565  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.302721  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.302883  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.303092  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.303105  392787 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438-m02 && echo "ha-365438-m02" | sudo tee /etc/hostname
	I0916 18:10:55.431880  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438-m02
	
	I0916 18:10:55.431916  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.434778  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.435067  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.435101  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.435316  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.435517  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.435707  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.435817  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.435951  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.436169  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.436186  392787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:10:55.558139  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:10:55.558170  392787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:10:55.558191  392787 buildroot.go:174] setting up certificates
	I0916 18:10:55.558204  392787 provision.go:84] configureAuth start
	I0916 18:10:55.558216  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetMachineName
	I0916 18:10:55.558517  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:55.561254  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.561613  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.561646  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.561762  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.563980  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.564292  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.564319  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.564424  392787 provision.go:143] copyHostCerts
	I0916 18:10:55.564462  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:55.564501  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:10:55.564515  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:10:55.564595  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:10:55.564686  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:55.564704  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:10:55.564709  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:10:55.564735  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:10:55.564778  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:55.564794  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:10:55.564800  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:10:55.564820  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:10:55.564868  392787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438-m02 san=[127.0.0.1 192.168.39.18 ha-365438-m02 localhost minikube]
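The server certificate generated above carries the SANs listed in the log (127.0.0.1, 192.168.39.18, ha-365438-m02, localhost, minikube). The rough sketch below shows one way to issue such a certificate with Go's crypto/x509; for brevity it self-signs, whereas the real flow signs with the cached CA key, so treat it as an assumption-laden illustration rather than minikube's provisioner.

    // Issue a server certificate with the SANs from the log (self-signed for brevity).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-365438-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-365438-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.18")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }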
	I0916 18:10:55.659270  392787 provision.go:177] copyRemoteCerts
	I0916 18:10:55.659331  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:10:55.659357  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.662129  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.662465  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.662496  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.662767  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.662951  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.663118  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.663262  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:55.751469  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:10:55.751547  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:10:55.780545  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:10:55.780645  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 18:10:55.806978  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:10:55.807056  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 18:10:55.832267  392787 provision.go:87] duration metric: took 274.049415ms to configureAuth
	I0916 18:10:55.832301  392787 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:10:55.832484  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:55.832558  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:55.835052  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.835378  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:55.835424  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:55.835638  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:55.835858  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.836019  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:55.836161  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:55.836384  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:55.836602  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:55.836618  392787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:10:56.078921  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:10:56.078956  392787 main.go:141] libmachine: Checking connection to Docker...
	I0916 18:10:56.078965  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetURL
	I0916 18:10:56.080562  392787 main.go:141] libmachine: (ha-365438-m02) DBG | Using libvirt version 6000000
	I0916 18:10:56.084040  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.084426  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.084455  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.084620  392787 main.go:141] libmachine: Docker is up and running!
	I0916 18:10:56.084639  392787 main.go:141] libmachine: Reticulating splines...
	I0916 18:10:56.084647  392787 client.go:171] duration metric: took 22.737750267s to LocalClient.Create
	I0916 18:10:56.084670  392787 start.go:167] duration metric: took 22.737847372s to libmachine.API.Create "ha-365438"
	I0916 18:10:56.084681  392787 start.go:293] postStartSetup for "ha-365438-m02" (driver="kvm2")
	I0916 18:10:56.084691  392787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:10:56.084717  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.084957  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:10:56.084982  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:56.087111  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.087449  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.087481  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.087639  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.087785  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.087934  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.088041  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:56.176159  392787 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:10:56.181304  392787 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:10:56.181340  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:10:56.181418  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:10:56.181506  392787 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:10:56.181518  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:10:56.181637  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:10:56.191699  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:56.217543  392787 start.go:296] duration metric: took 132.846204ms for postStartSetup
	I0916 18:10:56.217609  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetConfigRaw
	I0916 18:10:56.218265  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:56.221258  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.221691  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.221719  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.222100  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:10:56.222316  392787 start.go:128] duration metric: took 22.894847796s to createHost
	I0916 18:10:56.222342  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:56.224636  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.224968  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.224995  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.225137  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.225322  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.225486  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.225671  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.225848  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:10:56.226032  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I0916 18:10:56.226042  392787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:10:56.341865  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510256.296605946
	
	I0916 18:10:56.341890  392787 fix.go:216] guest clock: 1726510256.296605946
	I0916 18:10:56.341897  392787 fix.go:229] Guest: 2024-09-16 18:10:56.296605946 +0000 UTC Remote: 2024-09-16 18:10:56.222328327 +0000 UTC m=+70.396802035 (delta=74.277619ms)
	I0916 18:10:56.341914  392787 fix.go:200] guest clock delta is within tolerance: 74.277619ms
	I0916 18:10:56.341919  392787 start.go:83] releasing machines lock for "ha-365438-m02", held for 23.014537993s
	I0916 18:10:56.341935  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.342207  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:56.345069  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.345454  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.345484  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.348193  392787 out.go:177] * Found network options:
	I0916 18:10:56.349645  392787 out.go:177]   - NO_PROXY=192.168.39.165
	W0916 18:10:56.351018  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:10:56.351055  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.351741  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.351947  392787 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:10:56.352065  392787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:10:56.352102  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	W0916 18:10:56.352342  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:10:56.352416  392787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:10:56.352434  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:10:56.354999  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355229  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355370  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.355395  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355545  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.355676  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:56.355697  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:56.355734  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.355857  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:10:56.355882  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.356053  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:10:56.356061  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:56.356189  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:10:56.356285  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:10:56.597368  392787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:10:56.604127  392787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:10:56.604217  392787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:10:56.621380  392787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
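
For context on the step just logged: the find/mv run renames any bridge or podman CNI configs under /etc/cni/net.d by appending a ".mk_disabled" suffix, so only the bridge CNI that minikube configures later stays active. A minimal Go sketch of that rename-to-disable step, operating on a local directory instead of going through the ssh_runner used here (function and variable names are illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableCNIConfigs renames bridge/podman CNI configs under dir by appending
// the ".mk_disabled" suffix, mirroring the effect of the logged find/mv command.
func disableCNIConfigs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}
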
	I0916 18:10:56.621409  392787 start.go:495] detecting cgroup driver to use...
	I0916 18:10:56.621472  392787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:10:56.638525  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:10:56.652832  392787 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:10:56.652895  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:10:56.666875  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:10:56.681432  392787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:10:56.794171  392787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:10:56.948541  392787 docker.go:233] disabling docker service ...
	I0916 18:10:56.948618  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:10:56.963290  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:10:56.977237  392787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:10:57.098314  392787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:10:57.214672  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:10:57.229040  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:10:57.250234  392787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:10:57.250298  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.261898  392787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:10:57.261986  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.273749  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.285791  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.297387  392787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:10:57.309408  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.320879  392787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:10:57.341575  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
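
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go equivalent of those line-oriented edits, applied to the config as a string rather than over SSH (the regular expressions mirror the logged sed expressions; this is a sketch, not minikube's code):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same edits as the logged sed commands.
func rewriteCrioConf(conf string) string {
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line (the logged sed deletes it first).
	conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	// Use cgroupfs as the cgroup manager and put conmon in the pod cgroup.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`)
	// Make sure a default_sysctls array exists, then open unprivileged low ports.
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "\ndefault_sysctls = [\n]\n"
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	sample := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Println(rewriteCrioConf(sample))
}
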
	I0916 18:10:57.354155  392787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:10:57.365273  392787 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 18:10:57.365348  392787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 18:10:57.378772  392787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:10:57.390283  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:10:57.514621  392787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:10:57.617876  392787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:10:57.617971  392787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:10:57.622722  392787 start.go:563] Will wait 60s for crictl version
	I0916 18:10:57.622780  392787 ssh_runner.go:195] Run: which crictl
	I0916 18:10:57.626607  392787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:10:57.666912  392787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:10:57.666997  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:57.696803  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:10:57.727098  392787 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:10:57.728534  392787 out.go:177]   - env NO_PROXY=192.168.39.165
	I0916 18:10:57.729864  392787 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:10:57.732684  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:57.733062  392787 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:48 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:10:57.733088  392787 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:10:57.733256  392787 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:10:57.737616  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
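
The grep/rewrite pair above keeps exactly one "host.minikube.internal" entry in the guest's /etc/hosts: strip any stale line, then append the current gateway IP. A self-contained Go sketch of that replace-or-append pattern, working on the hosts content as a string (helper name and sample data are illustrative):

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry drops any line ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, re-added below with the current IP
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.39.2\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
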
	I0916 18:10:57.750177  392787 mustload.go:65] Loading cluster: ha-365438
	I0916 18:10:57.750375  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:10:57.750632  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:57.750679  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:57.766219  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I0916 18:10:57.766714  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:57.767204  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:57.767226  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:57.767545  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:57.767740  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:10:57.769216  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:10:57.769502  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:10:57.769538  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:10:57.784842  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0916 18:10:57.785407  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:10:57.785928  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:10:57.785950  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:10:57.786284  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:10:57.786496  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:10:57.786735  392787 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.18
	I0916 18:10:57.786749  392787 certs.go:194] generating shared ca certs ...
	I0916 18:10:57.786766  392787 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:57.786930  392787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:10:57.786978  392787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:10:57.786991  392787 certs.go:256] generating profile certs ...
	I0916 18:10:57.787090  392787 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:10:57.787123  392787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637
	I0916 18:10:57.787143  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.18 192.168.39.254]
	I0916 18:10:58.073914  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637 ...
	I0916 18:10:58.073946  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637: {Name:mkc37b2841fab59ca238ea965ad7556f32ca348d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:58.074141  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637 ...
	I0916 18:10:58.074162  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637: {Name:mk10897fb048b3932b74ff1e856667592d87e1c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:10:58.074262  392787 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.9479e637 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:10:58.074438  392787 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.9479e637 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
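
The cert steps above issue a fresh apiserver certificate for the new control-plane node, signed by the shared minikubeCA and carrying the service IP, loopback, both node IPs, and the kube-vip VIP as SANs. A minimal sketch of issuing such a certificate with Go's crypto/x509 (file names, RSA key size, subject, and validity are assumptions; the CA key is assumed to be PKCS#1 PEM; in practice the leaf key would be written out alongside the cert):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumed inputs: the shared CA cert/key in PEM form.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM data")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the apiserver leaf certificate (write this out too in practice).
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs mirroring the list in the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.165"), net.ParseIP("192.168.39.18"), net.ParseIP("192.168.39.254"),
		},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
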
	I0916 18:10:58.074692  392787 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:10:58.074712  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:10:58.074728  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:10:58.074747  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:10:58.074765  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:10:58.074781  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:10:58.074798  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:10:58.074812  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:10:58.074831  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:10:58.074897  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:10:58.074943  392787 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:10:58.074957  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:10:58.074990  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:10:58.075057  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:10:58.075090  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:10:58.075142  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:10:58.075180  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.075201  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.075220  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.075261  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:10:58.078319  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:58.078735  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:10:58.078764  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:10:58.078968  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:10:58.079158  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:10:58.079317  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:10:58.079445  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:10:58.153424  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 18:10:58.159338  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 18:10:58.171317  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 18:10:58.175718  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 18:10:58.186383  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 18:10:58.190833  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 18:10:58.202444  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 18:10:58.207018  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 18:10:58.218636  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 18:10:58.223899  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 18:10:58.236134  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 18:10:58.240865  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 18:10:58.251722  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:10:58.279030  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:10:58.304460  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:10:58.329385  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:10:58.354574  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 18:10:58.378950  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 18:10:58.404566  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:10:58.429261  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:10:58.454157  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:10:58.480818  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:10:58.505156  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:10:58.529092  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 18:10:58.545843  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 18:10:58.562312  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 18:10:58.579473  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 18:10:58.596583  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 18:10:58.614423  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 18:10:58.631330  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 18:10:58.648585  392787 ssh_runner.go:195] Run: openssl version
	I0916 18:10:58.654567  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:10:58.665082  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.669458  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.669527  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:10:58.675385  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 18:10:58.686367  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:10:58.696961  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.701655  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.701715  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:10:58.707782  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:10:58.718999  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:10:58.730368  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.735333  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.735404  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:10:58.741338  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:10:58.752083  392787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:10:58.756412  392787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 18:10:58.756467  392787 kubeadm.go:934] updating node {m02 192.168.39.18 8443 v1.31.1 crio true true} ...
	I0916 18:10:58.756563  392787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 18:10:58.756596  392787 kube-vip.go:115] generating kube-vip config ...
	I0916 18:10:58.756635  392787 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:10:58.773717  392787 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:10:58.773796  392787 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
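
The static-pod manifest above is generated on the fly and later written to /etc/kubernetes/manifests as kube-vip.yaml. A compact sketch of rendering such a manifest from a Go text/template, with the VIP, port, interface, and image as parameters (the template fragment is abbreviated to the parameterized fields; struct and field names are illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

// kubeVipOpts holds the handful of values that vary per cluster in the manifest above.
type kubeVipOpts struct {
	VIP       string // control-plane virtual IP, 192.168.39.254 here
	Port      string // apiserver port, "8443" here
	Interface string // NIC kube-vip advertises the VIP on
	Image     string // kube-vip image reference
}

// Abbreviated template: only the env entries that carry the parameters are shown.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	opts := kubeVipOpts{VIP: "192.168.39.254", Port: "8443", Interface: "eth0",
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0"}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
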
	I0916 18:10:58.773854  392787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:10:58.783882  392787 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 18:10:58.783972  392787 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 18:10:58.793538  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 18:10:58.793569  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:10:58.793613  392787 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 18:10:58.793638  392787 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm
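
The Downloading lines above fetch the kubelet and kubeadm binaries with a "checksum=file:<url>.sha256" hint, meaning each download is verified against the published SHA-256 before it is cached and copied to the node. A hedged sketch of that download-and-verify step using only the Go standard library (the URL is taken from the log; the helper name is mine and this is not minikube's downloader):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	got, err := fetch(base, "kubelet")
	if err != nil {
		panic(err)
	}
	// The published .sha256 file holds the hex digest (possibly followed by a filename).
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.Fields(string(want))[0] {
		panic("checksum mismatch for kubelet")
	}
	fmt.Println("kubelet verified:", got)
}
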
	I0916 18:10:58.793671  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:10:58.798198  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 18:10:58.798226  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 18:10:59.865446  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:10:59.865538  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:10:59.870686  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 18:10:59.870730  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 18:10:59.898327  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:10:59.924669  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:10:59.924798  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:10:59.935712  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 18:10:59.935762  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 18:11:00.432610  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 18:11:00.442560  392787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 18:11:00.459845  392787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:11:00.476474  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 18:11:00.493079  392787 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:11:00.496926  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:11:00.508998  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:11:00.634897  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:11:00.652295  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:11:00.652800  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:11:00.652856  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:11:00.668024  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44227
	I0916 18:11:00.668537  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:11:00.669099  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:11:00.669129  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:11:00.669453  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:11:00.669589  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:11:00.669709  392787 start.go:317] joinCluster: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:11:00.669835  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 18:11:00.669857  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:11:00.672691  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:11:00.673188  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:11:00.673216  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:11:00.673356  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:11:00.673556  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:11:00.673716  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:11:00.673853  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:11:00.826960  392787 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:11:00.827002  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zk8jja.80gx1qy4gw2fhz4q --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m02 --control-plane --apiserver-advertise-address=192.168.39.18 --apiserver-bind-port=8443"
	I0916 18:11:24.557299  392787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zk8jja.80gx1qy4gw2fhz4q --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m02 --control-plane --apiserver-advertise-address=192.168.39.18 --apiserver-bind-port=8443": (23.730266599s)
	I0916 18:11:24.557356  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 18:11:25.076897  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-365438-m02 minikube.k8s.io/updated_at=2024_09_16T18_11_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=ha-365438 minikube.k8s.io/primary=false
	I0916 18:11:25.234370  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-365438-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 18:11:25.377310  392787 start.go:319] duration metric: took 24.707595419s to joinCluster
	I0916 18:11:25.377403  392787 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:11:25.377705  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:11:25.379208  392787 out.go:177] * Verifying Kubernetes components...
	I0916 18:11:25.380483  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:11:25.648629  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:11:25.671202  392787 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:11:25.671590  392787 kapi.go:59] client config for ha-365438: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt", KeyFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key", CAFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 18:11:25.671700  392787 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0916 18:11:25.672027  392787 node_ready.go:35] waiting up to 6m0s for node "ha-365438-m02" to be "Ready" ...
	I0916 18:11:25.672155  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:25.672168  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:25.672179  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:25.672185  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:25.685584  392787 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
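
The remainder of this section is the readiness poll: minikube re-GETs /api/v1/nodes/ha-365438-m02 roughly every 500ms until the node's Ready condition flips to True (about 18.5s later, per the duration metric near the end). A stripped-down Go sketch of that loop against the same endpoint, assuming a pre-built *http.Client already carrying the client certificate, key, and CA from the kubeconfig (omitted here for brevity; names are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus is the tiny slice of the Node object needed for the Ready check.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the API server until the node reports Ready=True or the timeout expires.
func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	url := fmt.Sprintf("%s/api/v1/nodes/%s", apiServer, node)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			var n nodeStatus
			err = json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			if err == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						return nil
					}
				}
			}
		} else if err == nil {
			resp.Body.Close()
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready after %s", node, timeout)
}

func main() {
	// A real client would be built from the kubeconfig's TLS material; DefaultClient is a placeholder.
	client := http.DefaultClient
	if err := waitNodeReady(client, "https://192.168.39.165:8443", "ha-365438-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-365438-m02" is Ready`)
}
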
	I0916 18:11:26.172726  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:26.172751  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:26.172759  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:26.172763  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:26.176508  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:26.672501  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:26.672533  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:26.672543  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:26.672548  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:26.675715  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:27.173049  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:27.173081  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:27.173094  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:27.173100  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:27.178406  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:27.672326  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:27.672355  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:27.672367  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:27.672372  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:27.676185  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:27.676767  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:28.172971  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:28.172997  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:28.173006  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:28.173011  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:28.178047  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:28.673276  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:28.673300  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:28.673309  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:28.673313  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:28.677214  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:29.173079  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:29.173103  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:29.173111  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:29.173116  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:29.176619  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:29.672762  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:29.672789  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:29.672808  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:29.672814  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:29.676012  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:29.676888  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:30.173233  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:30.173259  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:30.173270  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:30.173277  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:30.176469  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:30.672536  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:30.672559  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:30.672567  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:30.672572  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:30.677943  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:31.172672  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:31.172700  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:31.172712  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:31.172719  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:31.177147  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:31.673251  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:31.673276  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:31.673285  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:31.673291  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:31.676778  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:31.677783  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:32.173152  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:32.173184  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:32.173197  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:32.173204  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:32.176580  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:32.672803  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:32.672828  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:32.672836  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:32.672841  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:32.736554  392787 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0916 18:11:33.173100  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:33.173123  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:33.173130  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:33.173135  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:33.176875  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:33.672479  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:33.672501  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:33.672510  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:33.672514  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:33.676089  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:34.173241  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:34.173273  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:34.173285  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:34.173291  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:34.176811  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:34.177466  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:34.672682  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:34.672706  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:34.672714  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:34.672718  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:34.676308  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:35.172941  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:35.172964  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:35.172973  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:35.172977  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:35.176362  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:35.672238  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:35.672264  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:35.672273  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:35.672277  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:35.676061  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:36.172969  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:36.173006  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:36.173015  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:36.173020  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:36.177121  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:36.177718  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:36.673112  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:36.673138  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:36.673147  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:36.673150  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:36.676423  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:37.172552  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:37.172578  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:37.172587  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:37.172591  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:37.176604  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:37.672936  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:37.672959  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:37.672970  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:37.672978  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:37.677363  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:38.172576  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:38.172601  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:38.172609  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:38.172615  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:38.176529  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:38.673253  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:38.673278  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:38.673289  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:38.673293  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:38.676581  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:38.677188  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:39.172551  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:39.172579  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:39.172588  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:39.172592  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:39.175634  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:39.672620  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:39.672644  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:39.672653  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:39.672657  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:39.676111  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:40.173176  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:40.173205  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:40.173216  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:40.173222  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:40.176742  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:40.672973  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:40.672998  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:40.673008  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:40.673014  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:40.676608  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:40.677266  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:41.173281  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:41.173307  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:41.173319  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:41.173323  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:41.177471  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:41.672288  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:41.672311  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:41.672320  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:41.672325  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:41.675832  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:42.172362  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:42.172390  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:42.172399  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:42.172403  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:42.176800  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:42.672515  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:42.672537  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:42.672546  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:42.672550  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:42.675794  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:43.172879  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:43.172905  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:43.172928  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:43.172935  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:43.176475  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:43.177117  392787 node_ready.go:53] node "ha-365438-m02" has status "Ready":"False"
	I0916 18:11:43.672959  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:43.672983  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:43.672991  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:43.672995  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:43.676640  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.172513  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.172536  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.172545  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.172549  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.176127  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.176873  392787 node_ready.go:49] node "ha-365438-m02" has status "Ready":"True"
	I0916 18:11:44.176898  392787 node_ready.go:38] duration metric: took 18.504846955s for node "ha-365438-m02" to be "Ready" ...
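
The node_ready poll above is just a GET on the node object every ~500ms until its Ready condition reports True. A minimal client-go sketch of that loop follows; the kubeconfig path, error handling, and cadence are simplifying assumptions, not minikube's actual wiring.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the named node until its Ready condition is True.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil // the node reported Ready
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "ha-365438-m02"); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }
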
	I0916 18:11:44.176924  392787 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 18:11:44.177046  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:44.177058  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.177068  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.177075  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.181938  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:44.188581  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.188703  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9svk8
	I0916 18:11:44.188715  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.188726  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.188731  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.192571  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.193418  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.193435  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.193442  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.193448  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.196227  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.196963  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.196985  392787 pod_ready.go:82] duration metric: took 8.375088ms for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.196995  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.197070  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zh7sm
	I0916 18:11:44.197079  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.197086  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.197091  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.200092  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.201125  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.201142  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.201152  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.201157  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.203717  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.204184  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.204204  392787 pod_ready.go:82] duration metric: took 7.203495ms for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.204216  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.204349  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438
	I0916 18:11:44.204360  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.204367  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.204374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.207253  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.208118  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.208144  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.208152  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.208158  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.212944  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:44.213817  392787 pod_ready.go:93] pod "etcd-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.213842  392787 pod_ready.go:82] duration metric: took 9.614804ms for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.213855  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.213941  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m02
	I0916 18:11:44.213952  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.213961  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.213969  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.216855  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.217554  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.217569  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.217582  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.217587  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.219890  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.220524  392787 pod_ready.go:93] pod "etcd-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.220547  392787 pod_ready.go:82] duration metric: took 6.680434ms for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.220566  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.373021  392787 request.go:632] Waited for 152.359224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:11:44.373104  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:11:44.373110  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.373121  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.373130  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.376513  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.573537  392787 request.go:632] Waited for 196.392944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.573621  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:44.573632  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.573643  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.573651  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.576401  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:44.576942  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.576963  392787 pod_ready.go:82] duration metric: took 356.389594ms for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.576973  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:44.773145  392787 request.go:632] Waited for 196.07609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:11:44.773235  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:11:44.773242  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.773252  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.773257  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.776702  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.972986  392787 request.go:632] Waited for 195.41926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.973068  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:44.973073  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:44.973081  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:44.973087  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:44.976276  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:44.977082  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:44.977104  392787 pod_ready.go:82] duration metric: took 400.123141ms for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
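
The repeated "Waited for ... due to client-side throttling" messages come from client-go's own rate limiter pausing requests (the library defaults are roughly QPS=5 with Burst=10). A caller that wanted fewer of those pauses could raise the limits on the rest.Config before building the clientset; the numbers below are illustrative only, not what minikube configures.

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a looser client-side rate limit.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // sustained requests per second before throttling kicks in
        cfg.Burst = 100 // short bursts allowed above QPS
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        cs, err := newFastClient("/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        _ = cs // the clientset is then used exactly like a default one
    }
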
	I0916 18:11:44.977116  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.173208  392787 request.go:632] Waited for 195.990306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:11:45.173296  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:11:45.173304  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.173315  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.173326  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.177405  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:45.373413  392787 request.go:632] Waited for 195.387676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:45.373475  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:45.373480  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.373486  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.373492  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.377394  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:45.378061  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:45.378084  392787 pod_ready.go:82] duration metric: took 400.960417ms for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.378094  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.573146  392787 request.go:632] Waited for 194.944123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:11:45.573222  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:11:45.573230  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.573242  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.573253  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.576584  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:45.773583  392787 request.go:632] Waited for 196.311224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:45.773653  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:45.773660  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.773668  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.773682  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.776606  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:45.777194  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:45.777216  392787 pod_ready.go:82] duration metric: took 399.114761ms for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.777229  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:45.973560  392787 request.go:632] Waited for 196.245182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:11:45.973661  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:11:45.973673  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:45.973684  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:45.973693  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:45.976599  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:11:46.172598  392787 request.go:632] Waited for 195.271477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.172688  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.172695  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.172706  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.172712  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.176099  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.176619  392787 pod_ready.go:93] pod "kube-proxy-4rfbj" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:46.176641  392787 pod_ready.go:82] duration metric: took 399.404319ms for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.176654  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.372626  392787 request.go:632] Waited for 195.863267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:11:46.372710  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:11:46.372717  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.372729  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.372740  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.376508  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.573484  392787 request.go:632] Waited for 196.34687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:46.573568  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:46.573573  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.573580  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.573588  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.577714  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:46.578414  392787 pod_ready.go:93] pod "kube-proxy-nrqvf" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:46.578435  392787 pod_ready.go:82] duration metric: took 401.773565ms for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.578444  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.772584  392787 request.go:632] Waited for 194.03345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:11:46.772658  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:11:46.772666  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.772678  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.772687  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.775938  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.973046  392787 request.go:632] Waited for 196.365949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.973110  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:11:46.973115  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:46.973123  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:46.973127  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:46.976724  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:46.977346  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:46.977372  392787 pod_ready.go:82] duration metric: took 398.918632ms for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:46.977388  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:47.173516  392787 request.go:632] Waited for 196.023516ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:11:47.173584  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:11:47.173593  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.173603  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.173611  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.177050  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:47.373257  392787 request.go:632] Waited for 195.422038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:47.373411  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:11:47.373423  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.373434  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.373444  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.377220  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:47.377734  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:11:47.377757  392787 pod_ready.go:82] duration metric: took 400.356993ms for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:11:47.377771  392787 pod_ready.go:39] duration metric: took 3.2008242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
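
Each pod_ready check above boils down to reading the pod's PodReady condition from the API. A small sketch in the same spirit; the kubeconfig path and the hard-coded pod name are placeholders for illustration.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named kube-system pod has PodReady=True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := podReady(context.Background(), cs, "etcd-ha-365438")
        if err != nil {
            panic(err)
        }
        fmt.Println("etcd-ha-365438 ready:", ready)
    }
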
	I0916 18:11:47.377792  392787 api_server.go:52] waiting for apiserver process to appear ...
	I0916 18:11:47.377906  392787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:11:47.394796  392787 api_server.go:72] duration metric: took 22.017327201s to wait for apiserver process to appear ...
	I0916 18:11:47.394830  392787 api_server.go:88] waiting for apiserver healthz status ...
	I0916 18:11:47.394858  392787 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0916 18:11:47.400272  392787 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0916 18:11:47.400351  392787 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0916 18:11:47.400359  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.400368  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.400374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.401426  392787 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 18:11:47.401533  392787 api_server.go:141] control plane version: v1.31.1
	I0916 18:11:47.401550  392787 api_server.go:131] duration metric: took 6.712256ms to wait for apiserver health ...
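
The healthz and /version probes above are two plain HTTPS GETs against the apiserver endpoint. A stripped-down sketch follows; the InsecureSkipVerify transport is an assumption made for brevity, whereas minikube authenticates with the cluster's client certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
        }}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.39.165:8443" + path)
            if err != nil {
                panic(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
        }
    }
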
	I0916 18:11:47.401559  392787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 18:11:47.572998  392787 request.go:632] Waited for 171.354317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.573097  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.573106  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.573119  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.573128  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.585382  392787 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0916 18:11:47.591130  392787 system_pods.go:59] 17 kube-system pods found
	I0916 18:11:47.591174  392787 system_pods.go:61] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:11:47.591182  392787 system_pods.go:61] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:11:47.591186  392787 system_pods.go:61] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:11:47.591189  392787 system_pods.go:61] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:11:47.591193  392787 system_pods.go:61] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:11:47.591196  392787 system_pods.go:61] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:11:47.591205  392787 system_pods.go:61] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:11:47.591213  392787 system_pods.go:61] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:11:47.591216  392787 system_pods.go:61] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:11:47.591219  392787 system_pods.go:61] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:11:47.591222  392787 system_pods.go:61] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:11:47.591226  392787 system_pods.go:61] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:11:47.591229  392787 system_pods.go:61] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:11:47.591232  392787 system_pods.go:61] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:11:47.591235  392787 system_pods.go:61] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:11:47.591238  392787 system_pods.go:61] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:11:47.591241  392787 system_pods.go:61] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:11:47.591247  392787 system_pods.go:74] duration metric: took 189.679883ms to wait for pod list to return data ...
	I0916 18:11:47.591257  392787 default_sa.go:34] waiting for default service account to be created ...
	I0916 18:11:47.772686  392787 request.go:632] Waited for 181.316746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:11:47.772751  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:11:47.772756  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.772764  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.772769  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.776463  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:11:47.776725  392787 default_sa.go:45] found service account: "default"
	I0916 18:11:47.776744  392787 default_sa.go:55] duration metric: took 185.478694ms for default service account to be created ...
	I0916 18:11:47.776752  392787 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 18:11:47.972968  392787 request.go:632] Waited for 196.10847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.973076  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:11:47.973087  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:47.973098  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:47.973109  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:47.978423  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:11:47.983687  392787 system_pods.go:86] 17 kube-system pods found
	I0916 18:11:47.983722  392787 system_pods.go:89] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:11:47.983728  392787 system_pods.go:89] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:11:47.983737  392787 system_pods.go:89] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:11:47.983742  392787 system_pods.go:89] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:11:47.983747  392787 system_pods.go:89] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:11:47.983750  392787 system_pods.go:89] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:11:47.983753  392787 system_pods.go:89] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:11:47.983757  392787 system_pods.go:89] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:11:47.983760  392787 system_pods.go:89] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:11:47.983764  392787 system_pods.go:89] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:11:47.983767  392787 system_pods.go:89] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:11:47.983770  392787 system_pods.go:89] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:11:47.983773  392787 system_pods.go:89] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:11:47.983777  392787 system_pods.go:89] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:11:47.983782  392787 system_pods.go:89] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:11:47.983785  392787 system_pods.go:89] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:11:47.983788  392787 system_pods.go:89] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:11:47.983795  392787 system_pods.go:126] duration metric: took 207.036892ms to wait for k8s-apps to be running ...
	I0916 18:11:47.983805  392787 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 18:11:47.983851  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:11:47.999009  392787 system_svc.go:56] duration metric: took 15.186653ms WaitForService to wait for kubelet
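
The kubelet service check above is the command shown in the log, `sudo systemctl is-active --quiet service kubelet`, executed over SSH by the ssh_runner. Run locally against the kubelet unit, an equivalent exit-code check looks like this:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` exits 0 when the unit is active,
        // non-zero otherwise, so the error value is the whole answer.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
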
	I0916 18:11:47.999054  392787 kubeadm.go:582] duration metric: took 22.621593946s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:11:47.999081  392787 node_conditions.go:102] verifying NodePressure condition ...
	I0916 18:11:48.173654  392787 request.go:632] Waited for 174.446242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0916 18:11:48.173725  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0916 18:11:48.173733  392787 round_trippers.go:469] Request Headers:
	I0916 18:11:48.173745  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:11:48.173752  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:11:48.178018  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:11:48.178795  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:11:48.178824  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:11:48.178840  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:11:48.178845  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:11:48.178851  392787 node_conditions.go:105] duration metric: took 179.764557ms to run NodePressure ...
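
The NodePressure step reads each node's capacity, which is where the 17734596Ki of ephemeral storage and 2 CPUs reported above come from. A small sketch of the same readout; the kubeconfig path is a placeholder.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]                  // e.g. "2"
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. "17734596Ki"
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
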
	I0916 18:11:48.178866  392787 start.go:241] waiting for startup goroutines ...
	I0916 18:11:48.178904  392787 start.go:255] writing updated cluster config ...
	I0916 18:11:48.181519  392787 out.go:201] 
	I0916 18:11:48.183337  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:11:48.183448  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:11:48.185191  392787 out.go:177] * Starting "ha-365438-m03" control-plane node in "ha-365438" cluster
	I0916 18:11:48.186550  392787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:11:48.186588  392787 cache.go:56] Caching tarball of preloaded images
	I0916 18:11:48.186760  392787 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:11:48.186776  392787 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:11:48.186919  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:11:48.187167  392787 start.go:360] acquireMachinesLock for ha-365438-m03: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:11:48.187230  392787 start.go:364] duration metric: took 33.205µs to acquireMachinesLock for "ha-365438-m03"
	I0916 18:11:48.187269  392787 start.go:93] Provisioning new machine with config: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:11:48.187461  392787 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0916 18:11:48.189846  392787 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 18:11:48.189969  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:11:48.190012  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:11:48.205644  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35543
	I0916 18:11:48.206157  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:11:48.206787  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:11:48.206812  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:11:48.207140  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:11:48.207336  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:11:48.207480  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:11:48.207616  392787 start.go:159] libmachine.API.Create for "ha-365438" (driver="kvm2")
	I0916 18:11:48.207643  392787 client.go:168] LocalClient.Create starting
	I0916 18:11:48.207672  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 18:11:48.207708  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:11:48.207722  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:11:48.207796  392787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 18:11:48.207815  392787 main.go:141] libmachine: Decoding PEM data...
	I0916 18:11:48.207826  392787 main.go:141] libmachine: Parsing certificate...
	I0916 18:11:48.207842  392787 main.go:141] libmachine: Running pre-create checks...
	I0916 18:11:48.207850  392787 main.go:141] libmachine: (ha-365438-m03) Calling .PreCreateCheck
	I0916 18:11:48.207998  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetConfigRaw
	I0916 18:11:48.208444  392787 main.go:141] libmachine: Creating machine...
	I0916 18:11:48.208458  392787 main.go:141] libmachine: (ha-365438-m03) Calling .Create
	I0916 18:11:48.208610  392787 main.go:141] libmachine: (ha-365438-m03) Creating KVM machine...
	I0916 18:11:48.209971  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found existing default KVM network
	I0916 18:11:48.210053  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found existing private KVM network mk-ha-365438
	I0916 18:11:48.210156  392787 main.go:141] libmachine: (ha-365438-m03) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03 ...
	I0916 18:11:48.210193  392787 main.go:141] libmachine: (ha-365438-m03) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 18:11:48.210295  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.210172  393559 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:11:48.210435  392787 main.go:141] libmachine: (ha-365438-m03) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 18:11:48.483007  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.482852  393559 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa...
	I0916 18:11:48.658840  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.658716  393559 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/ha-365438-m03.rawdisk...
	I0916 18:11:48.658867  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Writing magic tar header
	I0916 18:11:48.658878  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Writing SSH key tar header
	I0916 18:11:48.658889  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:48.658828  393559 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03 ...
	I0916 18:11:48.658968  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03
	I0916 18:11:48.659000  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03 (perms=drwx------)
	I0916 18:11:48.659011  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 18:11:48.659026  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:11:48.659038  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 18:11:48.659048  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 18:11:48.659077  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 18:11:48.659089  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 18:11:48.659103  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 18:11:48.659116  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 18:11:48.659123  392787 main.go:141] libmachine: (ha-365438-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 18:11:48.659131  392787 main.go:141] libmachine: (ha-365438-m03) Creating domain...
	I0916 18:11:48.659140  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home/jenkins
	I0916 18:11:48.659150  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Checking permissions on dir: /home
	I0916 18:11:48.659162  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Skipping /home - not owner
	I0916 18:11:48.659979  392787 main.go:141] libmachine: (ha-365438-m03) define libvirt domain using xml: 
	I0916 18:11:48.660009  392787 main.go:141] libmachine: (ha-365438-m03) <domain type='kvm'>
	I0916 18:11:48.660019  392787 main.go:141] libmachine: (ha-365438-m03)   <name>ha-365438-m03</name>
	I0916 18:11:48.660028  392787 main.go:141] libmachine: (ha-365438-m03)   <memory unit='MiB'>2200</memory>
	I0916 18:11:48.660036  392787 main.go:141] libmachine: (ha-365438-m03)   <vcpu>2</vcpu>
	I0916 18:11:48.660045  392787 main.go:141] libmachine: (ha-365438-m03)   <features>
	I0916 18:11:48.660056  392787 main.go:141] libmachine: (ha-365438-m03)     <acpi/>
	I0916 18:11:48.660065  392787 main.go:141] libmachine: (ha-365438-m03)     <apic/>
	I0916 18:11:48.660076  392787 main.go:141] libmachine: (ha-365438-m03)     <pae/>
	I0916 18:11:48.660084  392787 main.go:141] libmachine: (ha-365438-m03)     
	I0916 18:11:48.660120  392787 main.go:141] libmachine: (ha-365438-m03)   </features>
	I0916 18:11:48.660143  392787 main.go:141] libmachine: (ha-365438-m03)   <cpu mode='host-passthrough'>
	I0916 18:11:48.660155  392787 main.go:141] libmachine: (ha-365438-m03)   
	I0916 18:11:48.660164  392787 main.go:141] libmachine: (ha-365438-m03)   </cpu>
	I0916 18:11:48.660175  392787 main.go:141] libmachine: (ha-365438-m03)   <os>
	I0916 18:11:48.660190  392787 main.go:141] libmachine: (ha-365438-m03)     <type>hvm</type>
	I0916 18:11:48.660201  392787 main.go:141] libmachine: (ha-365438-m03)     <boot dev='cdrom'/>
	I0916 18:11:48.660209  392787 main.go:141] libmachine: (ha-365438-m03)     <boot dev='hd'/>
	I0916 18:11:48.660220  392787 main.go:141] libmachine: (ha-365438-m03)     <bootmenu enable='no'/>
	I0916 18:11:48.660229  392787 main.go:141] libmachine: (ha-365438-m03)   </os>
	I0916 18:11:48.660239  392787 main.go:141] libmachine: (ha-365438-m03)   <devices>
	I0916 18:11:48.660246  392787 main.go:141] libmachine: (ha-365438-m03)     <disk type='file' device='cdrom'>
	I0916 18:11:48.660261  392787 main.go:141] libmachine: (ha-365438-m03)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/boot2docker.iso'/>
	I0916 18:11:48.660272  392787 main.go:141] libmachine: (ha-365438-m03)       <target dev='hdc' bus='scsi'/>
	I0916 18:11:48.660283  392787 main.go:141] libmachine: (ha-365438-m03)       <readonly/>
	I0916 18:11:48.660296  392787 main.go:141] libmachine: (ha-365438-m03)     </disk>
	I0916 18:11:48.660308  392787 main.go:141] libmachine: (ha-365438-m03)     <disk type='file' device='disk'>
	I0916 18:11:48.660333  392787 main.go:141] libmachine: (ha-365438-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 18:11:48.660349  392787 main.go:141] libmachine: (ha-365438-m03)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/ha-365438-m03.rawdisk'/>
	I0916 18:11:48.660359  392787 main.go:141] libmachine: (ha-365438-m03)       <target dev='hda' bus='virtio'/>
	I0916 18:11:48.660368  392787 main.go:141] libmachine: (ha-365438-m03)     </disk>
	I0916 18:11:48.660379  392787 main.go:141] libmachine: (ha-365438-m03)     <interface type='network'>
	I0916 18:11:48.660396  392787 main.go:141] libmachine: (ha-365438-m03)       <source network='mk-ha-365438'/>
	I0916 18:11:48.660407  392787 main.go:141] libmachine: (ha-365438-m03)       <model type='virtio'/>
	I0916 18:11:48.660417  392787 main.go:141] libmachine: (ha-365438-m03)     </interface>
	I0916 18:11:48.660444  392787 main.go:141] libmachine: (ha-365438-m03)     <interface type='network'>
	I0916 18:11:48.660454  392787 main.go:141] libmachine: (ha-365438-m03)       <source network='default'/>
	I0916 18:11:48.660463  392787 main.go:141] libmachine: (ha-365438-m03)       <model type='virtio'/>
	I0916 18:11:48.660472  392787 main.go:141] libmachine: (ha-365438-m03)     </interface>
	I0916 18:11:48.660482  392787 main.go:141] libmachine: (ha-365438-m03)     <serial type='pty'>
	I0916 18:11:48.660491  392787 main.go:141] libmachine: (ha-365438-m03)       <target port='0'/>
	I0916 18:11:48.660502  392787 main.go:141] libmachine: (ha-365438-m03)     </serial>
	I0916 18:11:48.660512  392787 main.go:141] libmachine: (ha-365438-m03)     <console type='pty'>
	I0916 18:11:48.660523  392787 main.go:141] libmachine: (ha-365438-m03)       <target type='serial' port='0'/>
	I0916 18:11:48.660532  392787 main.go:141] libmachine: (ha-365438-m03)     </console>
	I0916 18:11:48.660549  392787 main.go:141] libmachine: (ha-365438-m03)     <rng model='virtio'>
	I0916 18:11:48.660567  392787 main.go:141] libmachine: (ha-365438-m03)       <backend model='random'>/dev/random</backend>
	I0916 18:11:48.660579  392787 main.go:141] libmachine: (ha-365438-m03)     </rng>
	I0916 18:11:48.660595  392787 main.go:141] libmachine: (ha-365438-m03)     
	I0916 18:11:48.660609  392787 main.go:141] libmachine: (ha-365438-m03)     
	I0916 18:11:48.660618  392787 main.go:141] libmachine: (ha-365438-m03)   </devices>
	I0916 18:11:48.660628  392787 main.go:141] libmachine: (ha-365438-m03) </domain>
	I0916 18:11:48.660640  392787 main.go:141] libmachine: (ha-365438-m03) 
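
The <domain> XML printed line by line above is what libmachine defines through libvirt for the new VM. Outside of minikube, an equivalent definition could be registered and booted with the virsh CLI; the XML file path below is a placeholder, not a file the test writes.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `virsh define` registers the domain XML with libvirt; `virsh start` boots it.
        for _, args := range [][]string{
            {"virsh", "define", "/tmp/ha-365438-m03.xml"}, // placeholder path for the XML above
            {"virsh", "start", "ha-365438-m03"},
        } {
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            fmt.Printf("%v: %s (err=%v)\n", args, out, err)
        }
    }
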
	I0916 18:11:48.667531  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:50:76:20 in network default
	I0916 18:11:48.668111  392787 main.go:141] libmachine: (ha-365438-m03) Ensuring networks are active...
	I0916 18:11:48.668134  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:48.668790  392787 main.go:141] libmachine: (ha-365438-m03) Ensuring network default is active
	I0916 18:11:48.669178  392787 main.go:141] libmachine: (ha-365438-m03) Ensuring network mk-ha-365438 is active
	I0916 18:11:48.669602  392787 main.go:141] libmachine: (ha-365438-m03) Getting domain xml...
	I0916 18:11:48.670284  392787 main.go:141] libmachine: (ha-365438-m03) Creating domain...
	I0916 18:11:49.916314  392787 main.go:141] libmachine: (ha-365438-m03) Waiting to get IP...
	I0916 18:11:49.917055  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:49.917486  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:49.917525  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:49.917469  393559 retry.go:31] will retry after 198.51809ms: waiting for machine to come up
	I0916 18:11:50.117986  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:50.118535  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:50.118560  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:50.118479  393559 retry.go:31] will retry after 368.043611ms: waiting for machine to come up
	I0916 18:11:50.488070  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:50.488581  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:50.488610  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:50.488537  393559 retry.go:31] will retry after 388.359286ms: waiting for machine to come up
	I0916 18:11:50.877948  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:50.878401  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:50.878490  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:50.878376  393559 retry.go:31] will retry after 367.062779ms: waiting for machine to come up
	I0916 18:11:51.246933  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:51.247515  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:51.247548  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:51.247463  393559 retry.go:31] will retry after 517.788094ms: waiting for machine to come up
	I0916 18:11:51.767063  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:51.767627  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:51.767650  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:51.767582  393559 retry.go:31] will retry after 836.830273ms: waiting for machine to come up
	I0916 18:11:52.606349  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:52.606737  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:52.606766  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:52.606704  393559 retry.go:31] will retry after 884.544993ms: waiting for machine to come up
	I0916 18:11:53.493201  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:53.493736  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:53.493762  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:53.493701  393559 retry.go:31] will retry after 1.007434851s: waiting for machine to come up
	I0916 18:11:54.503181  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:54.503551  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:54.503600  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:54.503511  393559 retry.go:31] will retry after 1.759545297s: waiting for machine to come up
	I0916 18:11:56.264502  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:56.264997  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:56.265029  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:56.264905  393559 retry.go:31] will retry after 2.178225549s: waiting for machine to come up
	I0916 18:11:58.444424  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:11:58.444913  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:11:58.444952  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:11:58.444850  393559 retry.go:31] will retry after 2.536690522s: waiting for machine to come up
	I0916 18:12:00.982928  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:00.983341  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:12:00.983364  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:12:00.983305  393559 retry.go:31] will retry after 2.6089067s: waiting for machine to come up
	I0916 18:12:03.593830  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:03.594390  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:12:03.594413  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:12:03.594324  393559 retry.go:31] will retry after 4.326497593s: waiting for machine to come up
	I0916 18:12:07.925823  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:07.926196  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find current IP address of domain ha-365438-m03 in network mk-ha-365438
	I0916 18:12:07.926220  392787 main.go:141] libmachine: (ha-365438-m03) DBG | I0916 18:12:07.926153  393559 retry.go:31] will retry after 4.753851469s: waiting for machine to come up
	I0916 18:12:12.684646  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.685156  392787 main.go:141] libmachine: (ha-365438-m03) Found IP for machine: 192.168.39.231
	I0916 18:12:12.685182  392787 main.go:141] libmachine: (ha-365438-m03) Reserving static IP address...
	I0916 18:12:12.685195  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has current primary IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.685590  392787 main.go:141] libmachine: (ha-365438-m03) DBG | unable to find host DHCP lease matching {name: "ha-365438-m03", mac: "52:54:00:ac:e5:94", ip: "192.168.39.231"} in network mk-ha-365438
	I0916 18:12:12.761275  392787 main.go:141] libmachine: (ha-365438-m03) Reserved static IP address: 192.168.39.231
	I0916 18:12:12.761310  392787 main.go:141] libmachine: (ha-365438-m03) Waiting for SSH to be available...
	I0916 18:12:12.761319  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Getting to WaitForSSH function...
	I0916 18:12:12.764567  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.765135  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:12.765161  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.765395  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Using SSH client type: external
	I0916 18:12:12.765421  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa (-rw-------)
	I0916 18:12:12.765449  392787 main.go:141] libmachine: (ha-365438-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.231 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 18:12:12.765467  392787 main.go:141] libmachine: (ha-365438-m03) DBG | About to run SSH command:
	I0916 18:12:12.765483  392787 main.go:141] libmachine: (ha-365438-m03) DBG | exit 0
	I0916 18:12:12.893201  392787 main.go:141] libmachine: (ha-365438-m03) DBG | SSH cmd err, output: <nil>: 
	I0916 18:12:12.893458  392787 main.go:141] libmachine: (ha-365438-m03) KVM machine creation complete!
	I0916 18:12:12.893817  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetConfigRaw
	I0916 18:12:12.894411  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:12.894635  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:12.894798  392787 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 18:12:12.894816  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:12:12.896330  392787 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 18:12:12.896345  392787 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 18:12:12.896352  392787 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 18:12:12.896360  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:12.898798  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.899139  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:12.899167  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:12.899350  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:12.899563  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:12.899722  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:12.899864  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:12.900011  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:12.900269  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:12.900281  392787 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 18:12:13.008569  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:12:13.008593  392787 main.go:141] libmachine: Detecting the provisioner...
	I0916 18:12:13.008601  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.011614  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.012064  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.012095  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.012238  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.012487  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.012691  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.012823  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.012999  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.013182  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.013194  392787 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 18:12:13.122122  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 18:12:13.122217  392787 main.go:141] libmachine: found compatible host: buildroot
	I0916 18:12:13.122231  392787 main.go:141] libmachine: Provisioning with buildroot...
	I0916 18:12:13.122246  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:12:13.122508  392787 buildroot.go:166] provisioning hostname "ha-365438-m03"
	I0916 18:12:13.122543  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:12:13.122756  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.125571  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.126197  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.126227  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.126608  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.126864  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.127078  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.127268  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.127497  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.127714  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.127727  392787 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438-m03 && echo "ha-365438-m03" | sudo tee /etc/hostname
	I0916 18:12:13.252848  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438-m03
	
	I0916 18:12:13.252889  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.255720  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.256099  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.256131  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.256322  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.256701  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.256885  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.257073  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.257255  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.257425  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.257442  392787 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:12:13.375127  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:12:13.375159  392787 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:12:13.375183  392787 buildroot.go:174] setting up certificates
	I0916 18:12:13.375195  392787 provision.go:84] configureAuth start
	I0916 18:12:13.375208  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetMachineName
	I0916 18:12:13.375530  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:13.378260  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.378510  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.378532  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.378673  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.380726  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.381127  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.381157  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.381308  392787 provision.go:143] copyHostCerts
	I0916 18:12:13.381338  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:12:13.381371  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:12:13.381380  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:12:13.381447  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:12:13.381524  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:12:13.381541  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:12:13.381547  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:12:13.381575  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:12:13.381636  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:12:13.381666  392787 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:12:13.381679  392787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:12:13.381713  392787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:12:13.381772  392787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438-m03 san=[127.0.0.1 192.168.39.231 ha-365438-m03 localhost minikube]
	I0916 18:12:13.515688  392787 provision.go:177] copyRemoteCerts
	I0916 18:12:13.515749  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:12:13.515777  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.518663  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.518955  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.518976  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.519173  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.519363  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.519503  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.519682  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:13.603320  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:12:13.603411  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 18:12:13.629247  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:12:13.629317  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 18:12:13.654026  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:12:13.654116  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:12:13.680776  392787 provision.go:87] duration metric: took 305.564483ms to configureAuth
	I0916 18:12:13.680813  392787 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:12:13.681128  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:12:13.681236  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.684310  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.684738  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.684769  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.684966  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.685174  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.685337  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.685488  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.685647  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:13.685859  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:13.685885  392787 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:12:13.926138  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:12:13.926174  392787 main.go:141] libmachine: Checking connection to Docker...
	I0916 18:12:13.926185  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetURL
	I0916 18:12:13.927640  392787 main.go:141] libmachine: (ha-365438-m03) DBG | Using libvirt version 6000000
	I0916 18:12:13.929849  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.930175  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.930198  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.930397  392787 main.go:141] libmachine: Docker is up and running!
	I0916 18:12:13.930418  392787 main.go:141] libmachine: Reticulating splines...
	I0916 18:12:13.930426  392787 client.go:171] duration metric: took 25.722776003s to LocalClient.Create
	I0916 18:12:13.930449  392787 start.go:167] duration metric: took 25.722834457s to libmachine.API.Create "ha-365438"
	I0916 18:12:13.930458  392787 start.go:293] postStartSetup for "ha-365438-m03" (driver="kvm2")
	I0916 18:12:13.930468  392787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:12:13.930487  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:13.930720  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:12:13.930744  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:13.932830  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.933169  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:13.933192  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:13.933321  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:13.933491  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:13.933636  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:13.933751  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:14.021119  392787 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:12:14.025372  392787 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:12:14.025404  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:12:14.025472  392787 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:12:14.025563  392787 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:12:14.025577  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:12:14.025704  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:12:14.037240  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:12:14.063223  392787 start.go:296] duration metric: took 132.749962ms for postStartSetup
	I0916 18:12:14.063293  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetConfigRaw
	I0916 18:12:14.064019  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:14.066928  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.067342  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.067371  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.067659  392787 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:12:14.067883  392787 start.go:128] duration metric: took 25.880405444s to createHost
	I0916 18:12:14.067918  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:14.070357  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.070728  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.070757  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.070893  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:14.071079  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.071222  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.071322  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:14.071492  392787 main.go:141] libmachine: Using SSH client type: native
	I0916 18:12:14.071677  392787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.231 22 <nil> <nil>}
	I0916 18:12:14.071694  392787 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:12:14.182399  392787 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510334.158889156
	
	I0916 18:12:14.182427  392787 fix.go:216] guest clock: 1726510334.158889156
	I0916 18:12:14.182437  392787 fix.go:229] Guest: 2024-09-16 18:12:14.158889156 +0000 UTC Remote: 2024-09-16 18:12:14.067900348 +0000 UTC m=+148.242374056 (delta=90.988808ms)
	I0916 18:12:14.182460  392787 fix.go:200] guest clock delta is within tolerance: 90.988808ms
	I0916 18:12:14.182467  392787 start.go:83] releasing machines lock for "ha-365438-m03", held for 25.995224257s
	I0916 18:12:14.182489  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.182814  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:14.186304  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.186750  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.186783  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.189603  392787 out.go:177] * Found network options:
	I0916 18:12:14.191277  392787 out.go:177]   - NO_PROXY=192.168.39.165,192.168.39.18
	W0916 18:12:14.193262  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 18:12:14.193294  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:12:14.193318  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.194050  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.194279  392787 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:12:14.194421  392787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:12:14.194468  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	W0916 18:12:14.194506  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 18:12:14.194531  392787 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 18:12:14.194609  392787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:12:14.194635  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:12:14.197775  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.197801  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.198169  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.198199  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.198225  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:14.198245  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:14.198305  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:14.198455  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:12:14.198606  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.198636  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:12:14.198775  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:14.198783  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:12:14.198998  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:14.198997  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:12:14.448954  392787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:12:14.455918  392787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:12:14.456003  392787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:12:14.476545  392787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 18:12:14.476582  392787 start.go:495] detecting cgroup driver to use...
	I0916 18:12:14.476663  392787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:12:14.496278  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:12:14.512278  392787 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:12:14.512337  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:12:14.527627  392787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:12:14.542070  392787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:12:14.680011  392787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:12:14.828286  392787 docker.go:233] disabling docker service ...
	I0916 18:12:14.828379  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:12:14.844496  392787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:12:14.859761  392787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:12:14.993508  392787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:12:15.124977  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:12:15.140329  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:12:15.160341  392787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:12:15.160420  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.173484  392787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:12:15.173555  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.186345  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.200092  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.211657  392787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:12:15.223390  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.235199  392787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.254654  392787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:12:15.266113  392787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:12:15.276891  392787 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 18:12:15.277002  392787 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 18:12:15.291279  392787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:12:15.301766  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:12:15.417275  392787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:12:15.521133  392787 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:12:15.521217  392787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:12:15.526494  392787 start.go:563] Will wait 60s for crictl version
	I0916 18:12:15.526576  392787 ssh_runner.go:195] Run: which crictl
	I0916 18:12:15.530531  392787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:12:15.574054  392787 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:12:15.574153  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:12:15.603221  392787 ssh_runner.go:195] Run: crio --version
	I0916 18:12:15.634208  392787 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:12:15.635691  392787 out.go:177]   - env NO_PROXY=192.168.39.165
	I0916 18:12:15.637183  392787 out.go:177]   - env NO_PROXY=192.168.39.165,192.168.39.18
	I0916 18:12:15.638493  392787 main.go:141] libmachine: (ha-365438-m03) Calling .GetIP
	I0916 18:12:15.641228  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:15.641576  392787 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:12:15.641606  392787 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:12:15.641841  392787 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:12:15.646120  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:12:15.659858  392787 mustload.go:65] Loading cluster: ha-365438
	I0916 18:12:15.660161  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:12:15.660526  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:12:15.660592  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:12:15.676323  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0916 18:12:15.676844  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:12:15.677362  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:12:15.677397  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:12:15.677786  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:12:15.677968  392787 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:12:15.679484  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:12:15.679783  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:12:15.679823  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:12:15.696055  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I0916 18:12:15.696528  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:12:15.697061  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:12:15.697081  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:12:15.697427  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:12:15.697663  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:12:15.697844  392787 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.231
	I0916 18:12:15.697856  392787 certs.go:194] generating shared ca certs ...
	I0916 18:12:15.697875  392787 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:12:15.698039  392787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:12:15.698100  392787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:12:15.698113  392787 certs.go:256] generating profile certs ...
	I0916 18:12:15.698220  392787 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:12:15.698250  392787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c
	I0916 18:12:15.698275  392787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.18 192.168.39.231 192.168.39.254]
	I0916 18:12:15.780429  392787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c ...
	I0916 18:12:15.780465  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c: {Name:mk92bfd88419c53d2051fea6e814cf12a8ab551b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:12:15.780648  392787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c ...
	I0916 18:12:15.780660  392787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c: {Name:mk93d7a277a030e4c0050a92c3af54e7af5dd6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:12:15.780749  392787 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.056d173c -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:12:15.780891  392787 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.056d173c -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:12:15.781064  392787 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:12:15.781082  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:12:15.781096  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:12:15.781109  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:12:15.781122  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:12:15.781137  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:12:15.781149  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:12:15.781161  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:12:15.801031  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:12:15.801129  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:12:15.801166  392787 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:12:15.801176  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:12:15.801199  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:12:15.801223  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:12:15.801245  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:12:15.801286  392787 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:12:15.801315  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:15.801336  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:12:15.801351  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:12:15.801389  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:12:15.804809  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:15.805305  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:12:15.805349  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:15.805590  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:12:15.805809  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:12:15.805990  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:12:15.806169  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:12:15.885366  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 18:12:15.891763  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 18:12:15.904394  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 18:12:15.909199  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0916 18:12:15.921290  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 18:12:15.926248  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 18:12:15.937817  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 18:12:15.942446  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0916 18:12:15.954821  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 18:12:15.960948  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 18:12:15.972262  392787 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 18:12:15.976972  392787 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0916 18:12:15.989284  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:12:16.017611  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:12:16.044622  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:12:16.074799  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:12:16.101738  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 18:12:16.128149  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 18:12:16.156672  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:12:16.184029  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:12:16.211535  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:12:16.239282  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:12:16.265500  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:12:16.291138  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 18:12:16.310218  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0916 18:12:16.328559  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 18:12:16.345749  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0916 18:12:16.363416  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 18:12:16.381315  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0916 18:12:16.398951  392787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 18:12:16.417890  392787 ssh_runner.go:195] Run: openssl version
	I0916 18:12:16.423972  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:12:16.435476  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:16.440311  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:16.440406  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:12:16.446585  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:12:16.459570  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:12:16.471240  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:12:16.476073  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:12:16.476170  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:12:16.482297  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:12:16.495192  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:12:16.506688  392787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:12:16.511325  392787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:12:16.511392  392787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:12:16.517616  392787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
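Each PEM dropped into /usr/share/ca-certificates is then hashed with openssl x509 -hash and symlinked into /etc/ssl/certs as <hash>.0, which is how OpenSSL-style trust stores look up CA certificates (b5213941.0, 51391683.0 and 3ec20f2e.0 above). A small Go sketch of that hash-and-link pairing, assuming the openssl binary is on PATH (illustrative, not the actual minikube code):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and creates
// the <hash>.0 symlink in the trust directory, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" pairs in the log.
func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}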
	I0916 18:12:16.529378  392787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:12:16.533743  392787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 18:12:16.533806  392787 kubeadm.go:934] updating node {m03 192.168.39.231 8443 v1.31.1 crio true true} ...
	I0916 18:12:16.533904  392787 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.231
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
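The block above is the kubelet systemd drop-in rendered for the new node: ExecStart pins the cached v1.31.1 kubelet, overrides the hostname to ha-365438-m03, and sets --node-ip to the machine's address from the profile config. A hedged sketch of rendering such a line with Go's text/template (the field names are illustrative, not minikube's template data):

package main

import (
	"os"
	"text/template"
)

// execStartTmpl reproduces the shape of the ExecStart line shown in the log.
const execStartTmpl = "ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet " +
	"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf " +
	"--config=/var/lib/kubelet/config.yaml " +
	"--hostname-override={{.NodeName}} " +
	"--kubeconfig=/etc/kubernetes/kubelet.conf " +
	"--node-ip={{.NodeIP}}\n"

func main() {
	t := template.Must(template.New("kubelet").Parse(execStartTmpl))
	_ = t.Execute(os.Stdout, struct{ Version, NodeName, NodeIP string }{
		"v1.31.1", "ha-365438-m03", "192.168.39.231",
	})
}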
	I0916 18:12:16.533930  392787 kube-vip.go:115] generating kube-vip config ...
	I0916 18:12:16.533973  392787 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:12:16.550457  392787 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:12:16.550538  392787 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
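kube-vip runs as a static pod on each control-plane node: vip_leaderelection lets the members elect which node currently answers for the 192.168.39.254 VIP, and lb_enable/lb_port add control-plane load balancing on 8443. Installing a static pod amounts to writing the rendered manifest into the kubelet's manifest directory, which is exactly the scp to /etc/kubernetes/manifests/kube-vip.yaml a few lines below; a minimal sketch (placeholder content, not minikube's code):

package main

import (
	"log"
	"os"
	"path/filepath"
)

// writeStaticPod drops a rendered manifest into the kubelet's static-pod
// directory; the kubelet watches this path and (re)creates the pod itself.
func writeStaticPod(manifestDir, name string, yaml []byte) error {
	return os.WriteFile(filepath.Join(manifestDir, name), yaml, 0o600)
}

func main() {
	manifest := []byte("# rendered kube-vip pod YAML goes here\n")
	if err := writeStaticPod("/etc/kubernetes/manifests", "kube-vip.yaml", manifest); err != nil {
		log.Fatal(err)
	}
}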
	I0916 18:12:16.550597  392787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:12:16.561170  392787 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 18:12:16.561251  392787 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 18:12:16.571686  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0916 18:12:16.571687  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0916 18:12:16.571691  392787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 18:12:16.571743  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:12:16.571782  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:12:16.571804  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 18:12:16.571727  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:12:16.571885  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 18:12:16.590503  392787 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:12:16.590537  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 18:12:16.590565  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 18:12:16.590504  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 18:12:16.590613  392787 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 18:12:16.590612  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 18:12:16.617764  392787 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 18:12:16.617812  392787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
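Because the node has no cached Kubernetes binaries, kubeadm, kubectl and kubelet are fetched from dl.k8s.io and verified against the published .sha256 files before being copied into /var/lib/minikube/binaries/v1.31.1. A stdlib-only Go sketch of such a checksum-verified download (error handling abbreviated; not the download code minikube actually uses):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url into dest and compares its SHA-256 against the
// digest published at shaURL.
func fetchVerified(url, shaURL, dest string) error {
	sumResp, err := http.Get(shaURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	// Hash while writing so the file is only trusted if the digest matches.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", dest, got, want)
	}
	return nil
}

func main() {
	fmt.Println(fetchVerified(
		"https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm",
		"https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256",
		"kubeadm"))
}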
	I0916 18:12:17.546617  392787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 18:12:17.556651  392787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 18:12:17.574637  392787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:12:17.594355  392787 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 18:12:17.611343  392787 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:12:17.615500  392787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 18:12:17.628546  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:12:17.765111  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:12:17.785384  392787 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:12:17.785722  392787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:12:17.785763  392787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:12:17.801417  392787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I0916 18:12:17.801922  392787 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:12:17.802503  392787 main.go:141] libmachine: Using API Version  1
	I0916 18:12:17.802528  392787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:12:17.802875  392787 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:12:17.803094  392787 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:12:17.803262  392787 start.go:317] joinCluster: &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:12:17.803394  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 18:12:17.803411  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:12:17.806440  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:17.806874  392787 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:12:17.806904  392787 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:12:17.807028  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:12:17.807213  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:12:17.807369  392787 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:12:17.807511  392787 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:12:17.973906  392787 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:12:17.973976  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7t40zy.1tbwssoyalawrr0f --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m03 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443"
	I0916 18:12:40.902025  392787 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7t40zy.1tbwssoyalawrr0f --discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-365438-m03 --control-plane --apiserver-advertise-address=192.168.39.231 --apiserver-bind-port=8443": (22.928017131s)
	I0916 18:12:40.902078  392787 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 18:12:41.483205  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-365438-m03 minikube.k8s.io/updated_at=2024_09_16T18_12_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=ha-365438 minikube.k8s.io/primary=false
	I0916 18:12:41.627686  392787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-365438-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 18:12:41.741188  392787 start.go:319] duration metric: took 23.937923236s to joinCluster
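After the kubeadm join succeeds, the new member is labeled with minikube metadata and the control-plane NoSchedule taint is removed (the trailing "-" in the taint argument means delete), so the node can run ordinary workloads as well, matching its Worker:true role. A hedged sketch of the same two kubectl calls via os/exec (the real run invokes the cached kubectl with an explicit --kubeconfig, as shown above):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Post-join bookkeeping: apply labels, then drop the NoSchedule taint.
	steps := [][]string{
		{"kubectl", "label", "--overwrite", "nodes", "ha-365438-m03",
			"minikube.k8s.io/name=ha-365438", "minikube.k8s.io/primary=false"},
		{"kubectl", "taint", "nodes", "ha-365438-m03",
			"node-role.kubernetes.io/control-plane:NoSchedule-"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v: %s", s, err, out)
		}
	}
}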
	I0916 18:12:41.741277  392787 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 18:12:41.741618  392787 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:12:41.742659  392787 out.go:177] * Verifying Kubernetes components...
	I0916 18:12:41.744104  392787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:12:42.052755  392787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:12:42.081527  392787 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:12:42.081873  392787 kapi.go:59] client config for ha-365438: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.crt", KeyFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key", CAFile:"/home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 18:12:42.081981  392787 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.165:8443
	I0916 18:12:42.082316  392787 node_ready.go:35] waiting up to 6m0s for node "ha-365438-m03" to be "Ready" ...
	I0916 18:12:42.082430  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:42.082445  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:42.082456  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:42.082461  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:42.085771  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:42.582782  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:42.582816  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:42.582828  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:42.582836  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:42.586265  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:43.082552  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:43.082576  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:43.082584  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:43.082588  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:43.086205  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:43.583150  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:43.583180  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:43.583192  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:43.583199  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:43.587018  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:44.083045  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:44.083067  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:44.083076  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:44.083080  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:44.087110  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:12:44.087820  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
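The repeated GETs above are a roughly 500ms poll of the Node object until its Ready condition reports True, bounded by the 6m0s wait declared at 18:12:41.741. A client-go sketch of the same wait, assuming a placeholder kubeconfig path (the test itself issues raw REST calls through round_trippers rather than this helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, for up to 6 minutes, until the NodeReady condition is True.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-365438-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished:", err)
}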
	I0916 18:12:44.583130  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:44.583156  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:44.583168  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:44.583174  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:44.586276  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:45.083344  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:45.083374  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:45.083386  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:45.083391  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:45.086404  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:45.583428  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:45.583454  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:45.583466  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:45.583471  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:45.586835  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:46.083067  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:46.083098  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:46.083109  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:46.083117  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:46.086876  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:46.583356  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:46.583383  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:46.583395  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:46.583408  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:46.586623  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:46.587362  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:47.082628  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:47.082655  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:47.082664  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:47.082667  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:47.086030  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:47.583300  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:47.583325  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:47.583339  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:47.583343  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:47.587136  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:48.083231  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:48.083253  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:48.083261  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:48.083266  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:48.086866  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:48.583216  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:48.583252  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:48.583274  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:48.583283  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:48.590890  392787 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 18:12:48.591473  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:49.082703  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:49.082727  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:49.082736  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:49.082741  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:49.086644  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:49.583567  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:49.583597  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:49.583606  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:49.583611  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:49.586911  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:50.083340  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:50.083362  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:50.083370  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:50.083374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:50.088634  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:12:50.583515  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:50.583540  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:50.583548  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:50.583552  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:50.587120  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:51.083268  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:51.083301  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:51.083311  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:51.083316  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:51.086864  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:51.087409  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:51.582749  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:51.582775  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:51.582786  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:51.582790  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:51.586554  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:52.083588  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:52.083617  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:52.083627  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:52.083632  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:52.087058  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:52.582622  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:52.582645  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:52.582659  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:52.582664  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:52.586165  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:53.083036  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:53.083059  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:53.083067  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:53.083072  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:53.086494  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:53.583553  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:53.583578  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:53.583589  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:53.583593  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:53.587308  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:53.587970  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:54.083329  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:54.083354  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:54.083364  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:54.083369  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:54.088254  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:12:54.583186  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:54.583210  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:54.583219  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:54.583223  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:54.586894  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:55.082776  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:55.082801  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:55.082810  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:55.082815  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:55.086315  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:55.583567  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:55.583591  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:55.583600  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:55.583609  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:55.587584  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:55.588203  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:56.082815  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:56.082839  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:56.082848  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:56.082853  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:56.086288  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:56.583253  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:56.583281  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:56.583293  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:56.583299  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:56.588417  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:12:57.083394  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:57.083418  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:57.083427  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:57.083432  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:57.086909  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:57.582896  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:57.582927  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:57.582939  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:57.582945  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:57.586090  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:58.082726  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:58.082755  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:58.082767  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:58.082774  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:58.086171  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:58.086896  392787 node_ready.go:53] node "ha-365438-m03" has status "Ready":"False"
	I0916 18:12:58.583401  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:58.583431  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:58.583444  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:58.583454  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:58.587059  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.083306  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.083332  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.083339  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.083343  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.086672  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.582558  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.582582  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.582594  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.582597  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.585909  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.586744  392787 node_ready.go:49] node "ha-365438-m03" has status "Ready":"True"
	I0916 18:12:59.586772  392787 node_ready.go:38] duration metric: took 17.504427469s for node "ha-365438-m03" to be "Ready" ...
	I0916 18:12:59.586785  392787 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 18:12:59.586883  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:12:59.586894  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.586905  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.586910  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.595755  392787 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 18:12:59.603593  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.603688  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9svk8
	I0916 18:12:59.603697  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.603705  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.603709  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.606559  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.607268  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:12:59.607287  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.607298  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.607303  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.610335  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.610779  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.610795  392787 pod_ready.go:82] duration metric: took 7.175735ms for pod "coredns-7c65d6cfc9-9svk8" in "kube-system" namespace to be "Ready" ...
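Each pod_ready wait fetches the pod, then the node it is scheduled on, and treats the pod as ready once its PodReady condition is True. A small sketch of that condition check using the corev1 types (a toy object stands in for the pod fetched from the API):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether a Pod's PodReady condition is True, the same check
// the pod_ready waits above apply to each system-critical pod.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(podReady(p)) // true
}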
	I0916 18:12:59.610806  392787 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.610866  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-zh7sm
	I0916 18:12:59.610876  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.610886  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.610892  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.613726  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.614410  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:12:59.614427  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.614437  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.614442  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.616779  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.617280  392787 pod_ready.go:93] pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.617301  392787 pod_ready.go:82] duration metric: took 6.486836ms for pod "coredns-7c65d6cfc9-zh7sm" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.617312  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.617370  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438
	I0916 18:12:59.617381  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.617390  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.617399  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.619864  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.620558  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:12:59.620570  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.620577  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.620583  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.622783  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.623203  392787 pod_ready.go:93] pod "etcd-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.623224  392787 pod_ready.go:82] duration metric: took 5.904153ms for pod "etcd-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.623245  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.623309  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m02
	I0916 18:12:59.623318  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.623324  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.623328  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.625871  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.626349  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:12:59.626363  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.626369  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.626374  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.628395  392787 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 18:12:59.628890  392787 pod_ready.go:93] pod "etcd-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.628908  392787 pod_ready.go:82] duration metric: took 5.653837ms for pod "etcd-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.628927  392787 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:12:59.783340  392787 request.go:632] Waited for 154.329904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m03
	I0916 18:12:59.783420  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/etcd-ha-365438-m03
	I0916 18:12:59.783428  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.783467  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.783478  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.787297  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.983422  392787 request.go:632] Waited for 195.400533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.983530  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:12:59.983547  392787 round_trippers.go:469] Request Headers:
	I0916 18:12:59.983559  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:12:59.983590  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:12:59.986759  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:12:59.987515  392787 pod_ready.go:93] pod "etcd-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:12:59.987534  392787 pod_ready.go:82] duration metric: took 358.598974ms for pod "etcd-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
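The "Waited ... due to client-side throttling" lines come from client-go's token-bucket rate limiter: with QPS and Burst left at zero in the rest.Config logged at 18:12:42.081, the client falls back to its defaults (5 QPS with a burst of 10), so this burst of node and pod GETs gets spaced out by a couple of hundred milliseconds each. Raising the limits on the config is the usual way to avoid the artificial delay; a sketch, again with a placeholder kubeconfig path:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Leaving QPS/Burst at zero makes client-go apply its defaults (5 QPS, burst 10),
	// which is what produces the "client-side throttling" waits in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}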
	I0916 18:12:59.987549  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.182889  392787 request.go:632] Waited for 195.23344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:13:00.182952  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438
	I0916 18:13:00.182957  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.182964  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.182968  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.186893  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:00.383215  392787 request.go:632] Waited for 195.432707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:00.383276  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:00.383281  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.383289  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.383292  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.386737  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:00.387448  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:00.387468  392787 pod_ready.go:82] duration metric: took 399.91301ms for pod "kube-apiserver-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.387478  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.583590  392787 request.go:632] Waited for 196.029732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:13:00.583676  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m02
	I0916 18:13:00.583683  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.583694  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.583704  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.587274  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:00.782762  392787 request.go:632] Waited for 194.162407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:00.782860  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:00.782871  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.782883  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.782891  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.792088  392787 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0916 18:13:00.792847  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:00.792883  392787 pod_ready.go:82] duration metric: took 405.39653ms for pod "kube-apiserver-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.792896  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:00.983091  392787 request.go:632] Waited for 190.084396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m03
	I0916 18:13:00.983174  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-365438-m03
	I0916 18:13:00.983181  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:00.983189  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:00.983196  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:00.987131  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.183425  392787 request.go:632] Waited for 195.419999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:01.183487  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:01.183492  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.183499  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.183502  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.188515  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:01.189086  392787 pod_ready.go:93] pod "kube-apiserver-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:01.189113  392787 pod_ready.go:82] duration metric: took 396.209012ms for pod "kube-apiserver-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.189129  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.383062  392787 request.go:632] Waited for 193.84647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:13:01.383169  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438
	I0916 18:13:01.383179  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.383187  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.383191  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.386966  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.582997  392787 request.go:632] Waited for 195.374257ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:01.583079  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:01.583088  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.583100  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.583109  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.587144  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:01.587995  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:01.588022  392787 pod_ready.go:82] duration metric: took 398.882515ms for pod "kube-controller-manager-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.588035  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.783048  392787 request.go:632] Waited for 194.906609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:13:01.783143  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m02
	I0916 18:13:01.783150  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.783158  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.783168  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.786633  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.982698  392787 request.go:632] Waited for 194.578986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:01.982779  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:01.982791  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:01.982801  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:01.982808  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:01.986249  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:01.986974  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:01.986999  392787 pod_ready.go:82] duration metric: took 398.955367ms for pod "kube-controller-manager-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:01.987013  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.183061  392787 request.go:632] Waited for 195.922884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m03
	I0916 18:13:02.183155  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-365438-m03
	I0916 18:13:02.183167  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.183180  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.183189  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.187631  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:02.383575  392787 request.go:632] Waited for 195.153908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:02.383651  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:02.383657  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.383666  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.383670  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.387023  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:02.387737  392787 pod_ready.go:93] pod "kube-controller-manager-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:02.387762  392787 pod_ready.go:82] duration metric: took 400.741572ms for pod "kube-controller-manager-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.387772  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.582850  392787 request.go:632] Waited for 194.977586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:13:02.582926  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4rfbj
	I0916 18:13:02.582935  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.582946  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.582956  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.586646  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:02.783544  392787 request.go:632] Waited for 196.229158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:02.783626  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:02.783631  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.783639  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.783643  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.787351  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:02.787936  392787 pod_ready.go:93] pod "kube-proxy-4rfbj" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:02.787957  392787 pod_ready.go:82] duration metric: took 400.175389ms for pod "kube-proxy-4rfbj" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.787967  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mjljp" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:02.982749  392787 request.go:632] Waited for 194.672685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjljp
	I0916 18:13:02.982827  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjljp
	I0916 18:13:02.982835  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:02.982843  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:02.982849  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:02.986721  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.182765  392787 request.go:632] Waited for 195.290403ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:03.182853  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:03.182859  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.182868  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.182871  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.187284  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:03.188075  392787 pod_ready.go:93] pod "kube-proxy-mjljp" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:03.188101  392787 pod_ready.go:82] duration metric: took 400.127597ms for pod "kube-proxy-mjljp" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.188115  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.383188  392787 request.go:632] Waited for 194.985677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:13:03.383283  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nrqvf
	I0916 18:13:03.383294  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.383305  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.383311  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.387031  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.583305  392787 request.go:632] Waited for 195.368535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:03.583374  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:03.583382  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.583392  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.583399  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.587275  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.587830  392787 pod_ready.go:93] pod "kube-proxy-nrqvf" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:03.587854  392787 pod_ready.go:82] duration metric: took 399.726525ms for pod "kube-proxy-nrqvf" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.587866  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.782909  392787 request.go:632] Waited for 194.941802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:13:03.782977  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438
	I0916 18:13:03.782984  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.782994  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.783000  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.786673  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.982600  392787 request.go:632] Waited for 195.308926ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:03.982664  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438
	I0916 18:13:03.982669  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:03.982676  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:03.982681  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:03.985742  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:03.986404  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:03.986422  392787 pod_ready.go:82] duration metric: took 398.54947ms for pod "kube-scheduler-ha-365438" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:03.986432  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.183531  392787 request.go:632] Waited for 197.004679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:13:04.183623  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m02
	I0916 18:13:04.183634  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.183646  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.183656  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.188127  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:04.383008  392787 request.go:632] Waited for 194.245966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:04.383084  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m02
	I0916 18:13:04.383091  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.383101  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.383115  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.390859  392787 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 18:13:04.391350  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:04.391370  392787 pod_ready.go:82] duration metric: took 404.930794ms for pod "kube-scheduler-ha-365438-m02" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.391379  392787 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.583595  392787 request.go:632] Waited for 192.100389ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m03
	I0916 18:13:04.583657  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-365438-m03
	I0916 18:13:04.583663  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.583671  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.583675  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.587702  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:04.783641  392787 request.go:632] Waited for 195.346085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:04.783704  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes/ha-365438-m03
	I0916 18:13:04.783712  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.783722  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.783731  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.787824  392787 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 18:13:04.788304  392787 pod_ready.go:93] pod "kube-scheduler-ha-365438-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 18:13:04.788324  392787 pod_ready.go:82] duration metric: took 396.938315ms for pod "kube-scheduler-ha-365438-m03" in "kube-system" namespace to be "Ready" ...
	I0916 18:13:04.788335  392787 pod_ready.go:39] duration metric: took 5.201535788s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
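
For context, the block above is minikube's pod-readiness phase: for each system pod it GETs the pod object, checks its Ready condition, then GETs the pod's node, throttling itself to roughly one request every ~200ms. Below is a minimal client-go sketch of that style of check; it is not minikube's own pod_ready.go, and the helper name waitPodReady, the kubeconfig path and the 400ms poll interval are illustrative assumptions only.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
    // Illustrative only; minikube's real loop also waits on the owning node and
    // tolerates client-side throttling, as the log above shows.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(400 * time.Millisecond) // roughly the ~400ms per-pod cadence seen above
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-ha-365438", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
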
	I0916 18:13:04.788352  392787 api_server.go:52] waiting for apiserver process to appear ...
	I0916 18:13:04.788407  392787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:13:04.804417  392787 api_server.go:72] duration metric: took 23.063094336s to wait for apiserver process to appear ...
	I0916 18:13:04.804447  392787 api_server.go:88] waiting for apiserver healthz status ...
	I0916 18:13:04.804469  392787 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0916 18:13:04.809550  392787 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0916 18:13:04.809652  392787 round_trippers.go:463] GET https://192.168.39.165:8443/version
	I0916 18:13:04.809661  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.809670  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.809678  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.810883  392787 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 18:13:04.810953  392787 api_server.go:141] control plane version: v1.31.1
	I0916 18:13:04.810969  392787 api_server.go:131] duration metric: took 6.515714ms to wait for apiserver health ...
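
The healthz and version probes above are plain GETs against the apiserver. The following hedged sketch performs the same two calls through client-go's discovery/REST client; the kubeconfig path is an assumption and error handling is kept minimal.

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // GET /healthz (api_server.go:253 above); a healthy apiserver answers 200 with body "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // GET /version, which yields the control plane version logged above (v1.31.1 in this run).
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }
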
	I0916 18:13:04.810977  392787 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 18:13:04.983408  392787 request.go:632] Waited for 172.33212ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:04.983479  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:04.983486  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:04.983497  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:04.983507  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:04.990262  392787 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 18:13:04.997971  392787 system_pods.go:59] 24 kube-system pods found
	I0916 18:13:04.998002  392787 system_pods.go:61] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:13:04.998007  392787 system_pods.go:61] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:13:04.998012  392787 system_pods.go:61] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:13:04.998015  392787 system_pods.go:61] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:13:04.998019  392787 system_pods.go:61] "etcd-ha-365438-m03" [45ddb461-9dd3-427f-a452-5877e0d64c70] Running
	I0916 18:13:04.998022  392787 system_pods.go:61] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:13:04.998025  392787 system_pods.go:61] "kindnet-99gkn" [10d5b9d6-42b5-4e43-9338-9af09c16e31d] Running
	I0916 18:13:04.998028  392787 system_pods.go:61] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:13:04.998032  392787 system_pods.go:61] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:13:04.998035  392787 system_pods.go:61] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:13:04.998038  392787 system_pods.go:61] "kube-apiserver-ha-365438-m03" [d0d96b4f-e681-41c0-9880-1b08a79dae8b] Running
	I0916 18:13:04.998041  392787 system_pods.go:61] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:13:04.998045  392787 system_pods.go:61] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:13:04.998051  392787 system_pods.go:61] "kube-controller-manager-ha-365438-m03" [d66ec66c-bcb2-406c-bce2-b9fa2e926a94] Running
	I0916 18:13:04.998056  392787 system_pods.go:61] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:13:04.998062  392787 system_pods.go:61] "kube-proxy-mjljp" [796ffc54-f5ab-4475-a94b-f1b5c0e3b016] Running
	I0916 18:13:04.998067  392787 system_pods.go:61] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:13:04.998072  392787 system_pods.go:61] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:13:04.998080  392787 system_pods.go:61] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:13:04.998085  392787 system_pods.go:61] "kube-scheduler-ha-365438-m03" [3128b7cd-6481-4cf0-90bd-848a297928ae] Running
	I0916 18:13:04.998088  392787 system_pods.go:61] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:13:04.998091  392787 system_pods.go:61] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:13:04.998094  392787 system_pods.go:61] "kube-vip-ha-365438-m03" [a9526f41-9953-4e9a-848b-ffe4f138550b] Running
	I0916 18:13:04.998097  392787 system_pods.go:61] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:13:04.998103  392787 system_pods.go:74] duration metric: took 187.120656ms to wait for pod list to return data ...
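
The system_pods check above (and the k8s-apps check that follows) is a single LIST of the kube-system namespace followed by a scan of each pod's state. A small client-go sketch of an equivalent list-and-count is below; the local kubeconfig path is assumed, and the Running-phase check is a simplification of minikube's per-pod logic.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // One LIST call, matching GET .../namespaces/kube-system/pods above.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        running := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                running++
            }
        }
        fmt.Printf("%d kube-system pods found, %d Running\n", len(pods.Items), running)
    }
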
	I0916 18:13:04.998115  392787 default_sa.go:34] waiting for default service account to be created ...
	I0916 18:13:05.183572  392787 request.go:632] Waited for 185.361206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:13:05.183653  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/default/serviceaccounts
	I0916 18:13:05.183664  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:05.183674  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:05.183684  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:05.189857  392787 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 18:13:05.190030  392787 default_sa.go:45] found service account: "default"
	I0916 18:13:05.190056  392787 default_sa.go:55] duration metric: took 191.933191ms for default service account to be created ...
	I0916 18:13:05.190067  392787 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 18:13:05.383549  392787 request.go:632] Waited for 193.39071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:05.383624  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/namespaces/kube-system/pods
	I0916 18:13:05.383631  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:05.383641  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:05.383652  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:05.389473  392787 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 18:13:05.396913  392787 system_pods.go:86] 24 kube-system pods found
	I0916 18:13:05.396965  392787 system_pods.go:89] "coredns-7c65d6cfc9-9svk8" [d217bdc6-679b-4142-8b23-6b42ce62bed7] Running
	I0916 18:13:05.396972  392787 system_pods.go:89] "coredns-7c65d6cfc9-zh7sm" [a06bf623-3365-4a96-9920-1732dbccb11e] Running
	I0916 18:13:05.396976  392787 system_pods.go:89] "etcd-ha-365438" [dd53da56-1b22-496c-b43d-700d5d16c281] Running
	I0916 18:13:05.396981  392787 system_pods.go:89] "etcd-ha-365438-m02" [3c70e871-9070-4a4a-98fa-755343b9406c] Running
	I0916 18:13:05.396984  392787 system_pods.go:89] "etcd-ha-365438-m03" [45ddb461-9dd3-427f-a452-5877e0d64c70] Running
	I0916 18:13:05.396988  392787 system_pods.go:89] "kindnet-599gk" [707eec6e-e38e-440a-8c26-67e1cd5fb644] Running
	I0916 18:13:05.396991  392787 system_pods.go:89] "kindnet-99gkn" [10d5b9d6-42b5-4e43-9338-9af09c16e31d] Running
	I0916 18:13:05.396995  392787 system_pods.go:89] "kindnet-q2vlq" [9945ea84-a699-4b83-82b7-217353297303] Running
	I0916 18:13:05.396999  392787 system_pods.go:89] "kube-apiserver-ha-365438" [8cdd6932-ebe6-44ba-a53d-6ef9fbc85bc6] Running
	I0916 18:13:05.397003  392787 system_pods.go:89] "kube-apiserver-ha-365438-m02" [6a75275a-810b-46c8-b91c-a1f0a0b9117b] Running
	I0916 18:13:05.397007  392787 system_pods.go:89] "kube-apiserver-ha-365438-m03" [d0d96b4f-e681-41c0-9880-1b08a79dae8b] Running
	I0916 18:13:05.397010  392787 system_pods.go:89] "kube-controller-manager-ha-365438" [f0ff96ae-8e9f-4c15-b9ce-974dd5a06986] Running
	I0916 18:13:05.397014  392787 system_pods.go:89] "kube-controller-manager-ha-365438-m02" [04c11f56-b241-4076-894d-37d51b64eba1] Running
	I0916 18:13:05.397020  392787 system_pods.go:89] "kube-controller-manager-ha-365438-m03" [d66ec66c-bcb2-406c-bce2-b9fa2e926a94] Running
	I0916 18:13:05.397027  392787 system_pods.go:89] "kube-proxy-4rfbj" [fe239922-db36-477f-9fe5-9635b598aae1] Running
	I0916 18:13:05.397031  392787 system_pods.go:89] "kube-proxy-mjljp" [796ffc54-f5ab-4475-a94b-f1b5c0e3b016] Running
	I0916 18:13:05.397037  392787 system_pods.go:89] "kube-proxy-nrqvf" [899abaca-8e00-43f8-8fac-9a62e385988d] Running
	I0916 18:13:05.397041  392787 system_pods.go:89] "kube-scheduler-ha-365438" [8584b531-084b-4462-9a76-925d65faee42] Running
	I0916 18:13:05.397047  392787 system_pods.go:89] "kube-scheduler-ha-365438-m02" [82718288-3ca7-441d-a89f-4109ad38790d] Running
	I0916 18:13:05.397051  392787 system_pods.go:89] "kube-scheduler-ha-365438-m03" [3128b7cd-6481-4cf0-90bd-848a297928ae] Running
	I0916 18:13:05.397057  392787 system_pods.go:89] "kube-vip-ha-365438" [f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4] Running
	I0916 18:13:05.397060  392787 system_pods.go:89] "kube-vip-ha-365438-m02" [c0226ba7-6844-45f0-8536-c61d967e71b7] Running
	I0916 18:13:05.397066  392787 system_pods.go:89] "kube-vip-ha-365438-m03" [a9526f41-9953-4e9a-848b-ffe4f138550b] Running
	I0916 18:13:05.397069  392787 system_pods.go:89] "storage-provisioner" [4e028ac1-4385-4d75-a80c-022a5bd90494] Running
	I0916 18:13:05.397077  392787 system_pods.go:126] duration metric: took 207.003058ms to wait for k8s-apps to be running ...
	I0916 18:13:05.397086  392787 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 18:13:05.397134  392787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:13:05.413583  392787 system_svc.go:56] duration metric: took 16.48209ms WaitForService to wait for kubelet
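
The kubelet check above is just a shell command whose exit status decides success; minikube executes it inside the node VM via its ssh_runner. The sketch below runs the same logged command locally with os/exec purely for illustration, so the sudo/SSH context is an assumption that differs from what minikube actually does in this run.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the command logged by ssh_runner.go:195 above; a zero exit status
        // is treated as "kubelet service is running", as in system_svc.go.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet check failed (non-zero exit):", err)
            return
        }
        fmt.Println("kubelet service is active")
    }
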
	I0916 18:13:05.413618  392787 kubeadm.go:582] duration metric: took 23.672302076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:13:05.413636  392787 node_conditions.go:102] verifying NodePressure condition ...
	I0916 18:13:05.583122  392787 request.go:632] Waited for 169.380554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.165:8443/api/v1/nodes
	I0916 18:13:05.583181  392787 round_trippers.go:463] GET https://192.168.39.165:8443/api/v1/nodes
	I0916 18:13:05.583186  392787 round_trippers.go:469] Request Headers:
	I0916 18:13:05.583193  392787 round_trippers.go:473]     Accept: application/json, */*
	I0916 18:13:05.583205  392787 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 18:13:05.587044  392787 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 18:13:05.587993  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:13:05.588017  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:13:05.588032  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:13:05.588037  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:13:05.588042  392787 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0916 18:13:05.588046  392787 node_conditions.go:123] node cpu capacity is 2
	I0916 18:13:05.588052  392787 node_conditions.go:105] duration metric: took 174.411005ms to run NodePressure ...
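
The NodePressure step reads every node once and reports the CPU and ephemeral-storage figures logged above. A client-go sketch that lists nodes and prints those two quantities follows; whether minikube reads Capacity or Allocatable is not visible in the log, so the use of Status.Capacity here is an assumption.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // One LIST of all nodes, matching GET /api/v1/nodes above.
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
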
	I0916 18:13:05.588067  392787 start.go:241] waiting for startup goroutines ...
	I0916 18:13:05.588095  392787 start.go:255] writing updated cluster config ...
	I0916 18:13:05.588448  392787 ssh_runner.go:195] Run: rm -f paused
	I0916 18:13:05.639929  392787 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0916 18:13:05.642140  392787 out.go:177] * Done! kubectl is now configured to use "ha-365438" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.938069231Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-8lxm5,Uid:65d5a00f-1f34-4797-af18-9e71ca834a79,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510387818400543,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:13:06.606552429Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4e028ac1-4385-4d75-a80c-022a5bd90494,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1726510244653796115,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T18:10:44.327644086Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-9svk8,Uid:d217bdc6-679b-4142-8b23-6b42ce62bed7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510244651182513,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:10:44.327412212Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-zh7sm,Uid:a06bf623-3365-4a96-9920-1732dbccb11e,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1726510244625563950,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:10:44.318362739Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&PodSandboxMetadata{Name:kindnet-599gk,Uid:707eec6e-e38e-440a-8c26-67e1cd5fb644,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510232158737096,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:10:31.846844512Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&PodSandboxMetadata{Name:kube-proxy-4rfbj,Uid:fe239922-db36-477f-9fe5-9635b598aae1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510232142319354,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:10:31.834083427Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-365438,Uid:1bd9095c288417d9c952dbb6f3027e0c,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1726510220758124788,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{kubernetes.io/config.hash: 1bd9095c288417d9c952dbb6f3027e0c,kubernetes.io/config.seen: 2024-09-16T18:10:20.278163861Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-365438,Uid:5eba74aa50b0a68dd2cab9f3e21a77d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510220755223980,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: 5eba74aa50b0a68dd2cab9f3e21a77d6,kubernetes.io/config.seen: 2024-09-16T18:10:20.278162050Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&PodSandboxMetadata{Name:etcd-ha-365438,Uid:f73a2686ca3c9ae2e5b8e38bca6a1d1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510220742783808,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.165:2379,kubernetes.io/config.hash: f73a2686ca3c9ae2e5b8e38bca6a1d1c,kubernetes.io/config.seen: 2024-09-16T18:10:20.278156969Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&Pod
SandboxMetadata{Name:kube-scheduler-ha-365438,Uid:bb1947c92a1198b7f2706653997a7278,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510220730547482,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb1947c92a1198b7f2706653997a7278,kubernetes.io/config.seen: 2024-09-16T18:10:20.278163068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-365438,Uid:c7b0ab34f4aee20f06faf7609d3e1205,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726510220727670672,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.165:8443,kubernetes.io/config.hash: c7b0ab34f4aee20f06faf7609d3e1205,kubernetes.io/config.seen: 2024-09-16T18:10:20.278160833Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=15d3b48d-b95a-46bb-9d29-ee254fef9385 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.938803323Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b723d4b-a81b-4b80-b62c-c840848773a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.938938959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b723d4b-a81b-4b80-b62c-c840848773a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.939232382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510390752076476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d,PodSandboxId:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244944427572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d39f7ccc716d1bbe4a8e70d241c5eea171e9c5637f11bc659a65ea0a3b67016,PodSandboxId:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510244926782227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61,PodSandboxId:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244845260706,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-33
65-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17265102
32596868635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510232322859466,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803,PodSandboxId:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510224182241691,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510221036880445,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486,PodSandboxId:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510221000983994,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510220989336285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6,PodSandboxId:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510220897947709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b723d4b-a81b-4b80-b62c-c840848773a4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.955742330Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09b0362a-0526-4477-b79c-a67fc1188716 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.955841760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09b0362a-0526-4477-b79c-a67fc1188716 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.956917167Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0451c1a8-a28d-4481-ae0a-696c8ec73469 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.957334532Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510656957313091,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0451c1a8-a28d-4481-ae0a-696c8ec73469 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.957942268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50ebf984-be34-42a9-ba06-146163c2ed17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.958020805Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50ebf984-be34-42a9-ba06-146163c2ed17 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:17:36 ha-365438 crio[665]: time="2024-09-16 18:17:36.958263376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510390752076476,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d,PodSandboxId:fe46e69c89ef4a2d9e1e7787198e86741cab3cd6cec4f15c302692fff4611d92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244944427572,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d39f7ccc716d1bbe4a8e70d241c5eea171e9c5637f11bc659a65ea0a3b67016,PodSandboxId:ea14225ed4b22c11f05f0117c2026dddee33b5b05ef32e7257a77ff4f61c1561,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510244926782227,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61,PodSandboxId:f9e31847522a437e7ac4fbc7bcf178c9057dc324433808813217140c2816320f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510244845260706,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-33
65-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17265102
32596868635,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510232322859466,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803,PodSandboxId:1832b99e80b46e73212cdb11a7d6e62421646c48efb5af2b6c0cffba55eb7261,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510224182241691,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1bd9095c288417d9c952dbb6f3027e0c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510221036880445,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486,PodSandboxId:96b362b092e850355972e5bcada4184f2daa0b0be993ca1e9a09314866ba5c19,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510221000983994,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510220989336285,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6,PodSandboxId:391ae22fdb2eeb09d3a9a41ff573d044c6012beed23c1ac57f4625dabc5c994f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510220897947709,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50ebf984-be34-42a9-ba06-146163c2ed17 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1c688c47b509b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   45427fea44b56       busybox-7dff88458-8lxm5
	637415283f8f3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   fe46e69c89ef4       coredns-7c65d6cfc9-9svk8
	6d39f7ccc716d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   ea14225ed4b22       storage-provisioner
	cc48bfbff79f1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   f9e31847522a4       coredns-7c65d6cfc9-zh7sm
	ae842d37f79ef       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      7 minutes ago       Running             kindnet-cni               0                   16b1b97f4eee2       kindnet-599gk
	fced6ce81805e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      7 minutes ago       Running             kube-proxy                0                   c7bb352443d32       kube-proxy-4rfbj
	bdc152e65d13d       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   1832b99e80b46       kube-vip-ha-365438
	4afcf5ad24d43       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      7 minutes ago       Running             kube-scheduler            0                   4415d47ee85c8       kube-scheduler-ha-365438
	c88b73102e4d2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      7 minutes ago       Running             kube-controller-manager   0                   96b362b092e85       kube-controller-manager-ha-365438
	ee90a7de312ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   265048ac4715e       etcd-ha-365438
	36d26d8df5e6b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      7 minutes ago       Running             kube-apiserver            0                   391ae22fdb2ee       kube-apiserver-ha-365438
	
	
	==> coredns [637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d] <==
	[INFO] 10.244.0.4:55046 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001617632s
	[INFO] 10.244.1.2:47379 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000105448s
	[INFO] 10.244.2.2:51760 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003638819s
	[INFO] 10.244.2.2:46488 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003037697s
	[INFO] 10.244.2.2:44401 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139379s
	[INFO] 10.244.2.2:56173 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112442s
	[INFO] 10.244.0.4:32857 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002069592s
	[INFO] 10.244.0.4:35029 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155002s
	[INFO] 10.244.0.4:49666 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167499s
	[INFO] 10.244.0.4:41304 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000152323s
	[INFO] 10.244.0.4:41961 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100019s
	[INFO] 10.244.1.2:48555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001727554s
	[INFO] 10.244.1.2:37688 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000206956s
	[INFO] 10.244.1.2:44275 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000110571s
	[INFO] 10.244.1.2:37001 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093275s
	[INFO] 10.244.1.2:57811 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112973s
	[INFO] 10.244.2.2:55064 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000213698s
	[INFO] 10.244.0.4:37672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109862s
	[INFO] 10.244.0.4:45703 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118782s
	[INFO] 10.244.1.2:52420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000150103s
	[INFO] 10.244.2.2:52865 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149526s
	[INFO] 10.244.0.4:44130 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119328s
	[INFO] 10.244.0.4:51235 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014834s
	[INFO] 10.244.1.2:43653 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000142634s
	[INFO] 10.244.1.2:57111 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010624s
	
	
	==> coredns [cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61] <==
	[INFO] 10.244.2.2:53710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142911s
	[INFO] 10.244.2.2:58300 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000214269s
	[INFO] 10.244.2.2:58024 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167123s
	[INFO] 10.244.2.2:45004 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000159707s
	[INFO] 10.244.0.4:52424 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109638s
	[INFO] 10.244.0.4:57524 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000203423s
	[INFO] 10.244.0.4:54948 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001771392s
	[INFO] 10.244.1.2:42603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150119s
	[INFO] 10.244.1.2:33836 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00012192s
	[INFO] 10.244.1.2:43769 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001330856s
	[INFO] 10.244.2.2:36423 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000180227s
	[INFO] 10.244.2.2:37438 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000200625s
	[INFO] 10.244.2.2:51918 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177109s
	[INFO] 10.244.0.4:40286 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009687s
	[INFO] 10.244.0.4:48298 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090197s
	[INFO] 10.244.1.2:55488 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228564s
	[INFO] 10.244.1.2:56818 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000347056s
	[INFO] 10.244.1.2:48235 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000199345s
	[INFO] 10.244.2.2:47702 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156299s
	[INFO] 10.244.2.2:56845 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193247s
	[INFO] 10.244.2.2:51347 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151041s
	[INFO] 10.244.0.4:52543 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000181864s
	[INFO] 10.244.0.4:60962 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097958s
	[INFO] 10.244.1.2:48543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000201712s
	[INFO] 10.244.1.2:47958 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011421s
	
	
	==> describe nodes <==
	Name:               ha-365438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T18_10_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:10:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:17:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:13:31 +0000   Mon, 16 Sep 2024 18:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-365438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 428a6b3869674553b5fa368f548d44fe
	  System UUID:                428a6b38-6967-4553-b5fa-368f548d44fe
	  Boot ID:                    bf6a145c-4c83-434e-832f-5377ceb5d93e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8lxm5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-7c65d6cfc9-9svk8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 coredns-7c65d6cfc9-zh7sm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 etcd-ha-365438                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-599gk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m6s
	  kube-system                 kube-apiserver-ha-365438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-ha-365438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-4rfbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-ha-365438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-vip-ha-365438                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m17s (x7 over 7m17s)  kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m17s (x8 over 7m17s)  kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s (x8 over 7m17s)  kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s                  kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s                  kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s                  kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal  NodeReady                6m53s                  kubelet          Node ha-365438 status is now: NodeReady
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	
	
	Name:               ha-365438-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_11_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:11:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:14:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 18:13:24 +0000   Mon, 16 Sep 2024 18:15:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-365438-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37dacc83603e40abb19ac133e9d2c030
	  System UUID:                37dacc83-603e-40ab-b19a-c133e9d2c030
	  Boot ID:                    5550a2cd-442d-4fc2-aaf0-b6d4f273236b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8whmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-365438-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-q2vlq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-365438-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-controller-manager-ha-365438-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-nrqvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-365438-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-365438-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s (x8 over 6m15s)  kubelet          Node ha-365438-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  NodeNotReady             2m31s                  node-controller  Node ha-365438-m02 status is now: NodeNotReady
	
	
	Name:               ha-365438-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_12_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:12:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:17:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:13:39 +0000   Mon, 16 Sep 2024 18:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-365438-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 113546b28b1a45aca3d715558877ace5
	  System UUID:                113546b2-8b1a-45ac-a3d7-15558877ace5
	  Boot ID:                    57192e71-e2a9-47b0-8ee4-d31dbab88507
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4hs24                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-365438-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-99gkn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-ha-365438-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-365438-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-mjljp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-ha-365438-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-vip-ha-365438-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node ha-365438-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	
	
	Name:               ha-365438-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_13_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:17:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:13:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:13:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:13:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:14:15 +0000   Mon, 16 Sep 2024 18:14:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-365438-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a60d15c35e49c89cf5c86d6e9e7127
	  System UUID:                19a60d15-c35e-49c8-9cf5-c86d6e9e7127
	  Boot ID:                    53999728-4b75-46ca-92fe-01082b4d22f7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gjxct       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-proxy-pln82    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m53s (x2 over 3m53s)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x2 over 3m53s)  kubelet          Node ha-365438-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x2 over 3m53s)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal  NodeReady                3m32s                  kubelet          Node ha-365438-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 18:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050391] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.824360] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556090] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Sep16 18:10] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.575440] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073590] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.211872] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.138357] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.296917] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.171159] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.216894] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.069713] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.331842] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.083288] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.257891] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.535666] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 18:11] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce] <==
	{"level":"warn","ts":"2024-09-16T18:17:37.317176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.344816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.351104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.361349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.367666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.381207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.389422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.397723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.402588Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.406770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.415743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.423128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.431037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.438341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.442622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.445548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.449111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.455779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.463208Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.466837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.470232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.473888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.480190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.486339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:17:37.545311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:17:37 up 7 min,  0 users,  load average: 0.28, 0.24, 0.13
	Linux ha-365438 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d] <==
	I0916 18:17:03.858777       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:17:13.858632       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:17:13.858773       1 main.go:299] handling current node
	I0916 18:17:13.858874       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:17:13.858898       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:17:13.859038       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:17:13.859060       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:17:13.859125       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:17:13.859144       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:17:23.857034       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:17:23.857102       1 main.go:299] handling current node
	I0916 18:17:23.857144       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:17:23.857152       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:17:23.857444       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:17:23.857512       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:17:23.857570       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:17:23.857575       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:17:33.848742       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:17:33.848795       1 main.go:299] handling current node
	I0916 18:17:33.848827       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:17:33.848833       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:17:33.848978       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:17:33.849003       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:17:33.849067       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:17:33.849086       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6] <==
	I0916 18:10:27.244394       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 18:10:27.265179       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 18:10:27.280369       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 18:10:31.798242       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 18:10:31.864144       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0916 18:12:38.928935       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0916 18:12:38.929374       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 12.563µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0916 18:12:38.930789       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0916 18:12:38.932083       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0916 18:12:38.933386       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.58541ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E0916 18:13:11.410930       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54582: use of closed network connection
	E0916 18:13:11.619705       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54610: use of closed network connection
	E0916 18:13:11.816958       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54624: use of closed network connection
	E0916 18:13:12.024707       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54642: use of closed network connection
	E0916 18:13:12.233572       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54662: use of closed network connection
	E0916 18:13:12.435652       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54686: use of closed network connection
	E0916 18:13:12.626780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	E0916 18:13:12.810687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54738: use of closed network connection
	E0916 18:13:12.995777       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54752: use of closed network connection
	E0916 18:13:13.307726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0916 18:13:13.490522       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54826: use of closed network connection
	E0916 18:13:13.683198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54842: use of closed network connection
	E0916 18:13:13.864048       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54866: use of closed network connection
	E0916 18:13:14.060925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54886: use of closed network connection
	E0916 18:13:14.258968       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54898: use of closed network connection
	
	
	==> kube-controller-manager [c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486] <==
	I0916 18:13:44.924449       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-365438-m04" podCIDRs=["10.244.3.0/24"]
	I0916 18:13:44.924624       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:44.927263       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:44.949065       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:45.139259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:45.548969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:45.871445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.024001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.024282       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-365438-m04"
	I0916 18:13:46.052316       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.394397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:46.424081       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:13:55.156809       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:05.787818       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	I0916 18:14:05.788815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:05.809341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:06.044684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:14:15.558924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:15:06.071107       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	I0916 18:15:06.071770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:15:06.095977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:15:06.241653       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="90.877135ms"
	I0916 18:15:06.241802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="93.468µs"
	I0916 18:15:06.443579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:15:11.332301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	
	
	==> kube-proxy [fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:10:32.987622       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 18:10:33.018026       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E0916 18:10:33.018217       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:10:33.098891       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:10:33.098934       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:10:33.098958       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:10:33.106834       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:10:33.107514       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:10:33.107530       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:10:33.111336       1 config.go:199] "Starting service config controller"
	I0916 18:10:33.113381       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:10:33.114077       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:10:33.115158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:10:33.118847       1 config.go:328] "Starting node config controller"
	I0916 18:10:33.118884       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:10:33.218918       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:10:33.218989       1 shared_informer.go:320] Caches are synced for node config
	I0916 18:10:33.219044       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804] <==
	E0916 18:10:25.314834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.381719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 18:10:25.381817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.390706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 18:10:25.390815       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.432183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 18:10:25.432289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.436726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 18:10:25.436823       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:10:25.529314       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 18:10:25.529380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 18:10:27.404687       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 18:12:38.213573       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-99gkn\": pod kindnet-99gkn is already assigned to node \"ha-365438-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-99gkn" node="ha-365438-m03"
	E0916 18:12:38.216530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 10d5b9d6-42b5-4e43-9338-9af09c16e31d(kube-system/kindnet-99gkn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-99gkn"
	E0916 18:12:38.217004       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-99gkn\": pod kindnet-99gkn is already assigned to node \"ha-365438-m03\"" pod="kube-system/kindnet-99gkn"
	I0916 18:12:38.217215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-99gkn" node="ha-365438-m03"
	I0916 18:13:06.562653       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="f2ef0616-2379-49c3-af53-b3779fb4448f" pod="default/busybox-7dff88458-4hs24" assumedNode="ha-365438-m03" currentNode="ha-365438-m02"
	E0916 18:13:06.587442       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4hs24" node="ha-365438-m02"
	E0916 18:13:06.587523       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f2ef0616-2379-49c3-af53-b3779fb4448f(default/busybox-7dff88458-4hs24) was assumed on ha-365438-m02 but assigned to ha-365438-m03" pod="default/busybox-7dff88458-4hs24"
	E0916 18:13:06.587555       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" pod="default/busybox-7dff88458-4hs24"
	I0916 18:13:06.587578       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4hs24" node="ha-365438-m03"
	E0916 18:13:06.618090       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	E0916 18:13:06.618528       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 11bd1f64-d695-4fc7-bec9-5694a7552fdf(default/busybox-7dff88458-8whmx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8whmx"
	E0916 18:13:06.618607       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" pod="default/busybox-7dff88458-8whmx"
	I0916 18:13:06.618663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	
	
	==> kubelet <==
	Sep 16 18:16:27 ha-365438 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:16:27 ha-365438 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:16:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:16:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:16:27 ha-365438 kubelet[1307]: E0916 18:16:27.327272    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510587326206067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:27 ha-365438 kubelet[1307]: E0916 18:16:27.327317    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510587326206067,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:37 ha-365438 kubelet[1307]: E0916 18:16:37.329202    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510597328725522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:37 ha-365438 kubelet[1307]: E0916 18:16:37.329562    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510597328725522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:47 ha-365438 kubelet[1307]: E0916 18:16:47.331913    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510607331315869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:47 ha-365438 kubelet[1307]: E0916 18:16:47.331941    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510607331315869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:57 ha-365438 kubelet[1307]: E0916 18:16:57.335039    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510617333732701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:16:57 ha-365438 kubelet[1307]: E0916 18:16:57.335433    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510617333732701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:07 ha-365438 kubelet[1307]: E0916 18:17:07.338164    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510627337627074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:07 ha-365438 kubelet[1307]: E0916 18:17:07.338544    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510627337627074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:17 ha-365438 kubelet[1307]: E0916 18:17:17.340626    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510637340148300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:17 ha-365438 kubelet[1307]: E0916 18:17:17.340654    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510637340148300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:27 ha-365438 kubelet[1307]: E0916 18:17:27.252104    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:17:27 ha-365438 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:17:27 ha-365438 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:17:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:17:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:17:27 ha-365438 kubelet[1307]: E0916 18:17:27.343170    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510647342724028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:27 ha-365438 kubelet[1307]: E0916 18:17:27.343218    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510647342724028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:37 ha-365438 kubelet[1307]: E0916 18:17:37.345729    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510657345107826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:17:37 ha-365438 kubelet[1307]: E0916 18:17:37.345761    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510657345107826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-365438 -n ha-365438
helpers_test.go:261: (dbg) Run:  kubectl --context ha-365438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (46.50s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-365438 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-365438 -v=7 --alsologtostderr
E0916 18:18:56.984205  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:19:24.688459  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-365438 -v=7 --alsologtostderr: exit status 82 (2m1.905545784s)

                                                
                                                
-- stdout --
	* Stopping node "ha-365438-m04"  ...
	* Stopping node "ha-365438-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:17:39.005987  398509 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:17:39.006090  398509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:39.006098  398509 out.go:358] Setting ErrFile to fd 2...
	I0916 18:17:39.006102  398509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:17:39.006264  398509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:17:39.006484  398509 out.go:352] Setting JSON to false
	I0916 18:17:39.006568  398509 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:39.006945  398509 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:39.007031  398509 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:17:39.007203  398509 mustload.go:65] Loading cluster: ha-365438
	I0916 18:17:39.007347  398509 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:17:39.007371  398509 stop.go:39] StopHost: ha-365438-m04
	I0916 18:17:39.007715  398509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:39.007752  398509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:39.023841  398509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I0916 18:17:39.024261  398509 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:39.024862  398509 main.go:141] libmachine: Using API Version  1
	I0916 18:17:39.024887  398509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:39.025289  398509 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:39.027667  398509 out.go:177] * Stopping node "ha-365438-m04"  ...
	I0916 18:17:39.029213  398509 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 18:17:39.029253  398509 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:17:39.029543  398509 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 18:17:39.029587  398509 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:17:39.032708  398509 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:39.033174  398509 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:13:30 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:17:39.033211  398509 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:17:39.033310  398509 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:17:39.033476  398509 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:17:39.033599  398509 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:17:39.033770  398509 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:17:39.120440  398509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 18:17:39.174205  398509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 18:17:39.229493  398509 main.go:141] libmachine: Stopping "ha-365438-m04"...
	I0916 18:17:39.229546  398509 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:39.230950  398509 main.go:141] libmachine: (ha-365438-m04) Calling .Stop
	I0916 18:17:39.234276  398509 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 0/120
	I0916 18:17:40.431054  398509 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:17:40.432270  398509 main.go:141] libmachine: Machine "ha-365438-m04" was stopped.
	I0916 18:17:40.432287  398509 stop.go:75] duration metric: took 1.403081003s to stop
	I0916 18:17:40.432325  398509 stop.go:39] StopHost: ha-365438-m03
	I0916 18:17:40.432678  398509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:17:40.432721  398509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:17:40.447386  398509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40639
	I0916 18:17:40.447814  398509 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:17:40.448247  398509 main.go:141] libmachine: Using API Version  1
	I0916 18:17:40.448265  398509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:17:40.448584  398509 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:17:40.450780  398509 out.go:177] * Stopping node "ha-365438-m03"  ...
	I0916 18:17:40.451922  398509 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 18:17:40.451957  398509 main.go:141] libmachine: (ha-365438-m03) Calling .DriverName
	I0916 18:17:40.452218  398509 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 18:17:40.452248  398509 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHHostname
	I0916 18:17:40.454886  398509 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:40.455233  398509 main.go:141] libmachine: (ha-365438-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e5:94", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:12:03 +0000 UTC Type:0 Mac:52:54:00:ac:e5:94 Iaid: IPaddr:192.168.39.231 Prefix:24 Hostname:ha-365438-m03 Clientid:01:52:54:00:ac:e5:94}
	I0916 18:17:40.455277  398509 main.go:141] libmachine: (ha-365438-m03) DBG | domain ha-365438-m03 has defined IP address 192.168.39.231 and MAC address 52:54:00:ac:e5:94 in network mk-ha-365438
	I0916 18:17:40.455403  398509 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHPort
	I0916 18:17:40.455566  398509 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHKeyPath
	I0916 18:17:40.455715  398509 main.go:141] libmachine: (ha-365438-m03) Calling .GetSSHUsername
	I0916 18:17:40.455833  398509 sshutil.go:53] new ssh client: &{IP:192.168.39.231 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m03/id_rsa Username:docker}
	I0916 18:17:40.542529  398509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 18:17:40.599419  398509 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 18:17:40.654779  398509 main.go:141] libmachine: Stopping "ha-365438-m03"...
	I0916 18:17:40.654809  398509 main.go:141] libmachine: (ha-365438-m03) Calling .GetState
	I0916 18:17:40.656499  398509 main.go:141] libmachine: (ha-365438-m03) Calling .Stop
	I0916 18:17:40.660505  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 0/120
	I0916 18:17:41.661954  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 1/120
	I0916 18:17:42.663429  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 2/120
	I0916 18:17:43.664893  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 3/120
	I0916 18:17:44.666297  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 4/120
	I0916 18:17:45.668491  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 5/120
	I0916 18:17:46.669878  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 6/120
	I0916 18:17:47.671704  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 7/120
	I0916 18:17:48.673411  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 8/120
	I0916 18:17:49.674765  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 9/120
	I0916 18:17:50.677170  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 10/120
	I0916 18:17:51.678643  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 11/120
	I0916 18:17:52.680649  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 12/120
	I0916 18:17:53.682360  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 13/120
	I0916 18:17:54.683819  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 14/120
	I0916 18:17:55.685510  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 15/120
	I0916 18:17:56.687129  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 16/120
	I0916 18:17:57.688588  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 17/120
	I0916 18:17:58.690184  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 18/120
	I0916 18:17:59.691870  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 19/120
	I0916 18:18:00.693963  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 20/120
	I0916 18:18:01.695935  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 21/120
	I0916 18:18:02.697916  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 22/120
	I0916 18:18:03.699439  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 23/120
	I0916 18:18:04.700996  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 24/120
	I0916 18:18:05.702684  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 25/120
	I0916 18:18:06.704197  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 26/120
	I0916 18:18:07.705732  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 27/120
	I0916 18:18:08.707454  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 28/120
	I0916 18:18:09.708903  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 29/120
	I0916 18:18:10.710760  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 30/120
	I0916 18:18:11.712383  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 31/120
	I0916 18:18:12.713939  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 32/120
	I0916 18:18:13.715690  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 33/120
	I0916 18:18:14.717140  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 34/120
	I0916 18:18:15.719057  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 35/120
	I0916 18:18:16.720485  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 36/120
	I0916 18:18:17.722035  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 37/120
	I0916 18:18:18.723717  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 38/120
	I0916 18:18:19.725092  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 39/120
	I0916 18:18:20.727182  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 40/120
	I0916 18:18:21.728799  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 41/120
	I0916 18:18:22.730068  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 42/120
	I0916 18:18:23.731455  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 43/120
	I0916 18:18:24.732981  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 44/120
	I0916 18:18:25.734792  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 45/120
	I0916 18:18:26.736523  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 46/120
	I0916 18:18:27.737854  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 47/120
	I0916 18:18:28.739215  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 48/120
	I0916 18:18:29.740411  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 49/120
	I0916 18:18:30.742155  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 50/120
	I0916 18:18:31.743707  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 51/120
	I0916 18:18:32.745014  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 52/120
	I0916 18:18:33.746458  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 53/120
	I0916 18:18:34.747684  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 54/120
	I0916 18:18:35.749636  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 55/120
	I0916 18:18:36.750879  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 56/120
	I0916 18:18:37.752308  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 57/120
	I0916 18:18:38.753807  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 58/120
	I0916 18:18:39.755178  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 59/120
	I0916 18:18:40.756986  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 60/120
	I0916 18:18:41.758549  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 61/120
	I0916 18:18:42.760046  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 62/120
	I0916 18:18:43.761560  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 63/120
	I0916 18:18:44.763494  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 64/120
	I0916 18:18:45.765388  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 65/120
	I0916 18:18:46.766930  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 66/120
	I0916 18:18:47.768426  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 67/120
	I0916 18:18:48.769900  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 68/120
	I0916 18:18:49.771322  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 69/120
	I0916 18:18:50.773150  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 70/120
	I0916 18:18:51.774648  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 71/120
	I0916 18:18:52.775995  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 72/120
	I0916 18:18:53.777562  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 73/120
	I0916 18:18:54.779187  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 74/120
	I0916 18:18:55.781170  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 75/120
	I0916 18:18:56.782517  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 76/120
	I0916 18:18:57.784087  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 77/120
	I0916 18:18:58.785477  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 78/120
	I0916 18:18:59.787210  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 79/120
	I0916 18:19:00.789736  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 80/120
	I0916 18:19:01.791308  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 81/120
	I0916 18:19:02.792762  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 82/120
	I0916 18:19:03.794268  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 83/120
	I0916 18:19:04.795681  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 84/120
	I0916 18:19:05.797360  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 85/120
	I0916 18:19:06.798623  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 86/120
	I0916 18:19:07.800100  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 87/120
	I0916 18:19:08.801518  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 88/120
	I0916 18:19:09.802972  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 89/120
	I0916 18:19:10.804687  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 90/120
	I0916 18:19:11.806246  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 91/120
	I0916 18:19:12.807662  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 92/120
	I0916 18:19:13.809003  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 93/120
	I0916 18:19:14.810465  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 94/120
	I0916 18:19:15.812755  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 95/120
	I0916 18:19:16.814345  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 96/120
	I0916 18:19:17.815861  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 97/120
	I0916 18:19:18.817405  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 98/120
	I0916 18:19:19.819102  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 99/120
	I0916 18:19:20.821000  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 100/120
	I0916 18:19:21.822486  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 101/120
	I0916 18:19:22.824565  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 102/120
	I0916 18:19:23.826034  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 103/120
	I0916 18:19:24.827440  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 104/120
	I0916 18:19:25.829246  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 105/120
	I0916 18:19:26.830525  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 106/120
	I0916 18:19:27.831982  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 107/120
	I0916 18:19:28.833583  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 108/120
	I0916 18:19:29.835657  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 109/120
	I0916 18:19:30.837856  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 110/120
	I0916 18:19:31.839515  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 111/120
	I0916 18:19:32.840913  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 112/120
	I0916 18:19:33.842402  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 113/120
	I0916 18:19:34.844500  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 114/120
	I0916 18:19:35.846447  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 115/120
	I0916 18:19:36.847810  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 116/120
	I0916 18:19:37.849491  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 117/120
	I0916 18:19:38.851613  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 118/120
	I0916 18:19:39.853059  398509 main.go:141] libmachine: (ha-365438-m03) Waiting for machine to stop 119/120
	I0916 18:19:40.853941  398509 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 18:19:40.853994  398509 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0916 18:19:40.856255  398509 out.go:201] 
	W0916 18:19:40.857786  398509 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0916 18:19:40.857806  398509 out.go:270] * 
	* 
	W0916 18:19:40.860754  398509 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 18:19:40.862220  398509 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-365438 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-365438 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-365438 --wait=true -v=7 --alsologtostderr: (4m10.443517108s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-365438
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-365438 -n ha-365438
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-365438 logs -n 25: (2.057724545s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m04 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp testdata/cp-test.txt                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438:/home/docker/cp-test_ha-365438-m04_ha-365438.txt                       |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438 sudo cat                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438.txt                                 |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03:/home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m03 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-365438 node stop m02 -v=7                                                     | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-365438 node start m02 -v=7                                                    | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-365438 -v=7                                                           | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-365438 -v=7                                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-365438 --wait=true -v=7                                                    | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:19 UTC | 16 Sep 24 18:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-365438                                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:23 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 18:19:40
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 18:19:40.912099  399410 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:19:40.912414  399410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:19:40.912425  399410 out.go:358] Setting ErrFile to fd 2...
	I0916 18:19:40.912431  399410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:19:40.912605  399410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:19:40.913213  399410 out.go:352] Setting JSON to false
	I0916 18:19:40.914236  399410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7324,"bootTime":1726503457,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:19:40.914356  399410 start.go:139] virtualization: kvm guest
	I0916 18:19:40.916741  399410 out.go:177] * [ha-365438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:19:40.919463  399410 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:19:40.919503  399410 notify.go:220] Checking for updates...
	I0916 18:19:40.921607  399410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:19:40.922830  399410 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:19:40.924238  399410 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:19:40.925922  399410 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:19:40.927166  399410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:19:40.928911  399410 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:19:40.929136  399410 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:19:40.929824  399410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:19:40.929882  399410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:19:40.945726  399410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
	I0916 18:19:40.946243  399410 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:19:40.946931  399410 main.go:141] libmachine: Using API Version  1
	I0916 18:19:40.946952  399410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:19:40.947312  399410 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:19:40.947522  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:19:40.986345  399410 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 18:19:40.987846  399410 start.go:297] selected driver: kvm2
	I0916 18:19:40.987870  399410 start.go:901] validating driver "kvm2" against &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:19:40.988019  399410 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:19:40.988379  399410 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:19:40.988477  399410 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 18:19:41.004742  399410 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 18:19:41.005475  399410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:19:41.005540  399410 cni.go:84] Creating CNI manager for ""
	I0916 18:19:41.005623  399410 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 18:19:41.005694  399410 start.go:340] cluster config:
	{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:19:41.005821  399410 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:19:41.007899  399410 out.go:177] * Starting "ha-365438" primary control-plane node in "ha-365438" cluster
	I0916 18:19:41.009138  399410 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:19:41.009187  399410 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 18:19:41.009204  399410 cache.go:56] Caching tarball of preloaded images
	I0916 18:19:41.009310  399410 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:19:41.009322  399410 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:19:41.009455  399410 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:19:41.009666  399410 start.go:360] acquireMachinesLock for ha-365438: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:19:41.009711  399410 start.go:364] duration metric: took 23.815µs to acquireMachinesLock for "ha-365438"
	I0916 18:19:41.009725  399410 start.go:96] Skipping create...Using existing machine configuration
	I0916 18:19:41.009731  399410 fix.go:54] fixHost starting: 
	I0916 18:19:41.009987  399410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:19:41.010021  399410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:19:41.027086  399410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0916 18:19:41.027644  399410 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:19:41.028212  399410 main.go:141] libmachine: Using API Version  1
	I0916 18:19:41.028235  399410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:19:41.028631  399410 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:19:41.028851  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:19:41.029052  399410 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:19:41.030872  399410 fix.go:112] recreateIfNeeded on ha-365438: state=Running err=<nil>
	W0916 18:19:41.030900  399410 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 18:19:41.033154  399410 out.go:177] * Updating the running kvm2 "ha-365438" VM ...
	I0916 18:19:41.034648  399410 machine.go:93] provisionDockerMachine start ...
	I0916 18:19:41.034683  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:19:41.034992  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.038047  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.038602  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.038629  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.038756  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.038942  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.039067  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.039250  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.039435  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.039626  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.039639  399410 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 18:19:41.158482  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438
	
	I0916 18:19:41.158515  399410 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:19:41.158796  399410 buildroot.go:166] provisioning hostname "ha-365438"
	I0916 18:19:41.158830  399410 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:19:41.159043  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.161940  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.162335  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.162357  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.162571  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.162771  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.162913  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.163044  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.163187  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.163384  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.163396  399410 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438 && echo "ha-365438" | sudo tee /etc/hostname
	I0916 18:19:41.297073  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438
	
	I0916 18:19:41.297105  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.300421  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.300971  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.301002  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.301286  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.301515  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.301734  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.301875  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.302107  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.302339  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.302364  399410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:19:41.418336  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:19:41.418386  399410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:19:41.418454  399410 buildroot.go:174] setting up certificates
	I0916 18:19:41.418467  399410 provision.go:84] configureAuth start
	I0916 18:19:41.418488  399410 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:19:41.418784  399410 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:19:41.421305  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.421712  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.421748  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.421991  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.424483  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.424857  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.424884  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.425013  399410 provision.go:143] copyHostCerts
	I0916 18:19:41.425041  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:19:41.425075  399410 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:19:41.425084  399410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:19:41.425150  399410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:19:41.425241  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:19:41.425258  399410 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:19:41.425262  399410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:19:41.425285  399410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:19:41.425328  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:19:41.425344  399410 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:19:41.425350  399410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:19:41.425370  399410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:19:41.425414  399410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438 san=[127.0.0.1 192.168.39.165 ha-365438 localhost minikube]
	I0916 18:19:41.512884  399410 provision.go:177] copyRemoteCerts
	I0916 18:19:41.512974  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:19:41.513000  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.515904  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.516268  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.516296  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.516458  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.516658  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.516816  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.516943  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:19:41.609301  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:19:41.609371  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:19:41.639940  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:19:41.640046  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 18:19:41.669729  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:19:41.669797  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 18:19:41.697347  399410 provision.go:87] duration metric: took 278.861856ms to configureAuth
	I0916 18:19:41.697377  399410 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:19:41.697610  399410 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:19:41.697692  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.700203  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.700618  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.700644  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.700812  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.701021  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.701156  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.701255  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.701379  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.701568  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.701585  399410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:21:12.618321  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:21:12.618374  399410 machine.go:96] duration metric: took 1m31.583683256s to provisionDockerMachine
	I0916 18:21:12.618402  399410 start.go:293] postStartSetup for "ha-365438" (driver="kvm2")
	I0916 18:21:12.618419  399410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:21:12.618449  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.618849  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:21:12.618897  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.622575  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.623110  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.623138  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.623381  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.623614  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.623801  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.623998  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:21:12.713212  399410 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:21:12.718486  399410 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:21:12.718524  399410 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:21:12.718603  399410 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:21:12.718711  399410 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:21:12.718726  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:21:12.718837  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:21:12.729199  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:21:12.755709  399410 start.go:296] duration metric: took 137.286751ms for postStartSetup
	I0916 18:21:12.755763  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.756102  399410 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 18:21:12.756135  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.758817  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.759167  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.759212  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.759363  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.759579  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.759818  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.760002  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	W0916 18:21:12.844343  399410 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 18:21:12.844373  399410 fix.go:56] duration metric: took 1m31.834641864s for fixHost
	I0916 18:21:12.844396  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.847148  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.847615  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.847649  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.847803  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.848010  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.848178  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.848290  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.848445  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:21:12.848675  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:21:12.848686  399410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:21:12.958425  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510872.907942926
	
	I0916 18:21:12.958451  399410 fix.go:216] guest clock: 1726510872.907942926
	I0916 18:21:12.958461  399410 fix.go:229] Guest: 2024-09-16 18:21:12.907942926 +0000 UTC Remote: 2024-09-16 18:21:12.844380126 +0000 UTC m=+91.970613970 (delta=63.5628ms)
	I0916 18:21:12.958490  399410 fix.go:200] guest clock delta is within tolerance: 63.5628ms
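Note: the tolerance check above compares the guest clock (read via `date +%s.%N`) against the host clock and logs the difference (63.5628ms here). A stdlib sketch of that comparison, using the two timestamps from the log; the tolerance value and function name are illustrative, not minikube's actual constants:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest/host clock difference is within the
// given tolerance. Illustrative only; minikube's real check lives in fix.go.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log above.
	guest := time.Unix(1726510872, 907942926)
	host := time.Unix(1726510872, 844380126)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // delta=63.5628ms withinTolerance=true
}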
	I0916 18:21:12.958497  399410 start.go:83] releasing machines lock for "ha-365438", held for 1m31.948776509s
	I0916 18:21:12.958520  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.958831  399410 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:21:12.961428  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.961868  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.961891  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.962105  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.962749  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.962924  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.963037  399410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:21:12.963087  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.963120  399410 ssh_runner.go:195] Run: cat /version.json
	I0916 18:21:12.963145  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.965812  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.965964  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.966209  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.966239  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.966384  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.966414  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.966420  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.966522  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.966587  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.966652  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.966709  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.966848  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:21:12.966885  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.967055  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:21:13.076791  399410 ssh_runner.go:195] Run: systemctl --version
	I0916 18:21:13.083790  399410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:21:13.252252  399410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:21:13.258683  399410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:21:13.258769  399410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:21:13.269948  399410 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 18:21:13.269983  399410 start.go:495] detecting cgroup driver to use...
	I0916 18:21:13.270066  399410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:21:13.293897  399410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:21:13.311277  399410 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:21:13.311352  399410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:21:13.329239  399410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:21:13.345557  399410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:21:13.499118  399410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:21:13.680893  399410 docker.go:233] disabling docker service ...
	I0916 18:21:13.680987  399410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:21:13.727541  399410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:21:13.788960  399410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:21:14.016465  399410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:21:14.244848  399410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:21:14.268710  399410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:21:14.289169  399410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:21:14.289287  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.300690  399410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:21:14.300777  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.311599  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.322866  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.334643  399410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:21:14.345705  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.356564  399410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.369110  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.380878  399410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:21:14.391062  399410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:21:14.400878  399410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:21:14.557573  399410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:21:24.607106  399410 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.049483074s)
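Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup and the unprivileged-port sysctl) before the ~10s CRI-O restart. A rough stdlib sketch of the same kind of in-place substitution, assuming the snippet already contains pause_image and cgroup_manager keys; it is not the code minikube runs:

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf performs the same style of line rewrites as the sed commands
// in the log: pin the pause image and set the cgroup manager.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}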
	I0916 18:21:24.607141  399410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:21:24.607204  399410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:21:24.612282  399410 start.go:563] Will wait 60s for crictl version
	I0916 18:21:24.612348  399410 ssh_runner.go:195] Run: which crictl
	I0916 18:21:24.616416  399410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:21:24.656445  399410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:21:24.656555  399410 ssh_runner.go:195] Run: crio --version
	I0916 18:21:24.689180  399410 ssh_runner.go:195] Run: crio --version
	I0916 18:21:24.722546  399410 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:21:24.724097  399410 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:21:24.727225  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:24.727814  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:24.727840  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:24.728080  399410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:21:24.733387  399410 kubeadm.go:883] updating cluster {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 18:21:24.733527  399410 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:21:24.733600  399410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:21:24.784701  399410 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:21:24.784725  399410 crio.go:433] Images already preloaded, skipping extraction
	I0916 18:21:24.784775  399410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:21:24.822301  399410 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:21:24.822328  399410 cache_images.go:84] Images are preloaded, skipping loading
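Note: the preload check above shells out to `sudo crictl images --output json` and concludes that all images are already present. A minimal sketch of inspecting such output; the JSON shape {"images":[{"repoTags":[...]}]} is an assumption about crictl's format, not taken from this log:

package main

import (
	"encoding/json"
	"fmt"
)

// imagesOutput models the assumed shape of `crictl images --output json`.
type imagesOutput struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var out imagesOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/pause:3.10")
	fmt.Println(ok, err) // true <nil>
}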
	I0916 18:21:24.822337  399410 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.31.1 crio true true} ...
	I0916 18:21:24.822439  399410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
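Note: the kubelet unit printed above (Wants=crio.service, ExecStart with --hostname-override and --node-ip) is rendered in memory and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp further down). A minimal text/template sketch of rendering such a drop-in; the template text is an abbreviation of what the log shows, not minikube's template verbatim:

package main

import (
	"os"
	"text/template"
)

// An abbreviated drop-in modelled on the unit printed in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log: node ha-365438 at 192.168.39.165 on Kubernetes v1.31.1.
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "ha-365438",
		"NodeIP":            "192.168.39.165",
	})
}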
	I0916 18:21:24.822511  399410 ssh_runner.go:195] Run: crio config
	I0916 18:21:24.871266  399410 cni.go:84] Creating CNI manager for ""
	I0916 18:21:24.871297  399410 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 18:21:24.871329  399410 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 18:21:24.871358  399410 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-365438 NodeName:ha-365438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 18:21:24.871528  399410 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-365438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
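Note: the kubeadm config above wires together three address ranges: the pod subnet 10.244.0.0/16, the service subnet 10.96.0.0/12 and the node network 192.168.39.0/24. A small stdlib sketch that sanity-checks they do not overlap, using net/netip; the check itself is added here for illustration and is not part of minikube:

package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two CIDR prefixes share any addresses.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Masked().Addr()) || b.Contains(a.Masked().Addr())
}

func main() {
	// Ranges from the generated kubeadm config above.
	pods := netip.MustParsePrefix("10.244.0.0/16")
	services := netip.MustParsePrefix("10.96.0.0/12")
	nodes := netip.MustParsePrefix("192.168.39.0/24")

	pairs := [][2]netip.Prefix{{pods, services}, {pods, nodes}, {services, nodes}}
	for _, p := range pairs {
		fmt.Printf("%s vs %s overlap=%v\n", p[0], p[1], overlaps(p[0], p[1]))
	}
}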
	
	I0916 18:21:24.871559  399410 kube-vip.go:115] generating kube-vip config ...
	I0916 18:21:24.871610  399410 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:21:24.883617  399410 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:21:24.883769  399410 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
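Note: the kube-vip manifest above enables ARP-based control-plane load balancing for the VIP 192.168.39.254 and configures leader election with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1 (seconds). A small sketch of the usual sanity relation between those values (lease > renew > retry), in the style of Kubernetes leader election; the constraint check is illustrative, not code from kube-vip:

package main

import (
	"fmt"
	"time"
)

// validLeaderElection applies the common Kubernetes-style constraint that the
// lease outlives the renew deadline, which in turn outlives the retry period.
func validLeaderElection(lease, renew, retry time.Duration) bool {
	return lease > renew && renew > retry && retry > 0
}

func main() {
	// Values from the kube-vip env block above, interpreted as seconds.
	lease, renew, retry := 5*time.Second, 3*time.Second, 1*time.Second
	fmt.Println(validLeaderElection(lease, renew, retry)) // true
}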
	I0916 18:21:24.883842  399410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:21:24.894146  399410 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 18:21:24.894228  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 18:21:24.904400  399410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0916 18:21:24.922914  399410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:21:24.941156  399410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0916 18:21:24.959542  399410 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 18:21:24.977487  399410 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:21:24.982931  399410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:21:25.129705  399410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:21:25.145669  399410 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.165
	I0916 18:21:25.145693  399410 certs.go:194] generating shared ca certs ...
	I0916 18:21:25.145719  399410 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:21:25.145895  399410 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:21:25.145961  399410 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:21:25.145976  399410 certs.go:256] generating profile certs ...
	I0916 18:21:25.146079  399410 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:21:25.146113  399410 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287
	I0916 18:21:25.146142  399410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.18 192.168.39.231 192.168.39.254]
	I0916 18:21:25.226318  399410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287 ...
	I0916 18:21:25.226356  399410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287: {Name:mk45ff29a074fb6aefea3420b5f16311d9c2952c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:21:25.226537  399410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287 ...
	I0916 18:21:25.226548  399410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287: {Name:mk47e4ca4bc91020185bcfb115bf39793da29b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:21:25.226617  399410 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:21:25.226798  399410 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:21:25.226939  399410 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:21:25.226955  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:21:25.226968  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:21:25.226981  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:21:25.226994  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:21:25.227007  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:21:25.227020  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:21:25.227032  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:21:25.227043  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:21:25.227091  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:21:25.227121  399410 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:21:25.227130  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:21:25.227150  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:21:25.227170  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:21:25.227190  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:21:25.227225  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:21:25.227250  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.227263  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.227275  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.227875  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:21:25.256474  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:21:25.283947  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:21:25.311628  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:21:25.337090  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 18:21:25.362167  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 18:21:25.388549  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:21:25.413273  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:21:25.438297  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:21:25.463289  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:21:25.488687  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:21:25.514094  399410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 18:21:25.531567  399410 ssh_runner.go:195] Run: openssl version
	I0916 18:21:25.537825  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:21:25.549186  399410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.554226  399410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.554297  399410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.560270  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:21:25.569960  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:21:25.580962  399410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.585883  399410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.585942  399410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.591944  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:21:25.601890  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:21:25.613441  399410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.618211  399410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.618276  399410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.624293  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 18:21:25.634093  399410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:21:25.639019  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 18:21:25.645122  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 18:21:25.658051  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 18:21:25.664372  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 18:21:25.670510  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 18:21:25.676748  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
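Note: the `openssl x509 -checkend 86400` runs above verify that each control-plane certificate remains valid for at least 24 hours. An equivalent stdlib sketch using crypto/x509; the PEM path is taken from the log and error handling is trimmed for brevity:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window (the stdlib counterpart of `openssl x509 -checkend`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}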
	I0916 18:21:25.682597  399410 kubeadm.go:392] StartCluster: {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:21:25.682733  399410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 18:21:25.682795  399410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 18:21:25.727890  399410 cri.go:89] found id: "786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111"
	I0916 18:21:25.727920  399410 cri.go:89] found id: "255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84"
	I0916 18:21:25.727926  399410 cri.go:89] found id: "d474ffcda9ea3b9dc66adad51f554d9ed54f7fe63b316cbe96c266b7311dc7e3"
	I0916 18:21:25.727931  399410 cri.go:89] found id: "ed95f724866d36c42ba065d18ace22308ee70c657fa2620f1fdcb326cc86b448"
	I0916 18:21:25.727935  399410 cri.go:89] found id: "8ade28da627b4f5198c66ae0f18cf962764bda43c0f4ceedcd43dcea8b1921c2"
	I0916 18:21:25.727940  399410 cri.go:89] found id: "8b22111b2c0ccf4d655ef72908353612f16023abbe0fdc2799d83b3f51a516d9"
	I0916 18:21:25.727944  399410 cri.go:89] found id: "637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d"
	I0916 18:21:25.727947  399410 cri.go:89] found id: "cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61"
	I0916 18:21:25.727951  399410 cri.go:89] found id: "ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d"
	I0916 18:21:25.727958  399410 cri.go:89] found id: "fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8"
	I0916 18:21:25.727963  399410 cri.go:89] found id: "bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803"
	I0916 18:21:25.727967  399410 cri.go:89] found id: "4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804"
	I0916 18:21:25.727971  399410 cri.go:89] found id: "c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486"
	I0916 18:21:25.727976  399410 cri.go:89] found id: "ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce"
	I0916 18:21:25.727984  399410 cri.go:89] found id: "36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6"
	I0916 18:21:25.727989  399410 cri.go:89] found id: ""
	I0916 18:21:25.728045  399410 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.058419861Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511032058336389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44ce7b61-5ae0-4a5f-b42d-35df18f35838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.059426307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7241b9c3-9bfc-4bc4-afca-87d11136ed38 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.059605234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7241b9c3-9bfc-4bc4-afca-87d11136ed38 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.060125339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7241b9c3-9bfc-4bc4-afca-87d11136ed38 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.106625723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49b89135-2675-4fb3-a135-da17153db945 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.106708571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49b89135-2675-4fb3-a135-da17153db945 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.107897134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f307956-ea57-4c59-8be4-9382cd25a7db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.108352183Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511032108325133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f307956-ea57-4c59-8be4-9382cd25a7db name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.109097042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83b5c204-7073-4a99-91b9-890a8c00b124 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.109175530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83b5c204-7073-4a99-91b9-890a8c00b124 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.109649214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83b5c204-7073-4a99-91b9-890a8c00b124 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.160299362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59ad96c5-84de-4977-be71-048021dcef55 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.160380524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59ad96c5-84de-4977-be71-048021dcef55 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.161693694Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3481fcd4-c94f-4908-81ba-f1de66417f1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.162104970Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511032162083563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3481fcd4-c94f-4908-81ba-f1de66417f1c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.162821034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea2f6fd0-0b33-477d-82c9-f55fe55f79ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.162880165Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea2f6fd0-0b33-477d-82c9-f55fe55f79ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.163269449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea2f6fd0-0b33-477d-82c9-f55fe55f79ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.220579147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ae6a5cf-1af3-4502-8214-b6ba69ac80ee name=/runtime.v1.RuntimeService/Version
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.220666377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ae6a5cf-1af3-4502-8214-b6ba69ac80ee name=/runtime.v1.RuntimeService/Version
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.224789969Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d936977-77e8-4472-9fdc-25eaed372cc9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.225294711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511032225253918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d936977-77e8-4472-9fdc-25eaed372cc9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.226342062Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=314afcf3-f279-4abb-9a07-9dd84067a3fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.226652687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=314afcf3-f279-4abb-9a07-9dd84067a3fa name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:23:52 ha-365438 crio[3810]: time="2024-09-16 18:23:52.228051630Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=314afcf3-f279-4abb-9a07-9dd84067a3fa name=/runtime.v1.RuntimeService/ListContainers
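Note: the entries above are CRI-O debug logs (RuntimeService/ListContainers, Version and ImageService/ImageFsInfo round-trips) as captured by the log collector. As a rough sketch only, assuming the profile for this run is ha-365438 and CRI-O runs as the crio systemd unit (which the crio[3810] journal prefix suggests), the same stream could be pulled straight from the node:

  $ minikube ssh -p ha-365438 "sudo journalctl -u crio --no-pager | tail -n 200"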
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	001573b7b9638       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   0f90e064c7ae5       storage-provisioner
	62f143f8a5310       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   8ed5e7ef07b86       kube-apiserver-ha-365438
	948266dfd539b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   cfdaad28f3f13       busybox-7dff88458-8lxm5
	c1141e0bd181c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   6d6354e7bb952       kube-controller-manager-ha-365438
	2a9f83ca4e241       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   5123710beac11       kube-vip-ha-365438
	c1340e7dafac3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   2                   fc1d4a333ee93       coredns-7c65d6cfc9-9svk8
	214a069b1af9a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   2                   a9c6610d86981       coredns-7c65d6cfc9-zh7sm
	aa483917c6d08       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   f65201a81c371       kube-proxy-4rfbj
	c6095c73ae01b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   0f90e064c7ae5       storage-provisioner
	2ecc2680e8434       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   5f38da9920a7c       kube-scheduler-ha-365438
	a09ecff4ae95f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   39ddf209c6fc8       kindnet-599gk
	78553547a1583       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   8ed5e7ef07b86       kube-apiserver-ha-365438
	e636b3f9a2c87       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   1dfcb43e6f37f       etcd-ha-365438
	c128f1e941b11       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   6d6354e7bb952       kube-controller-manager-ha-365438
	786c916a75ad8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Exited              coredns                   1                   c4de9c35b0bd6       coredns-7c65d6cfc9-zh7sm
	255453aac7614       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Exited              coredns                   1                   2ada558a992d8       coredns-7c65d6cfc9-9svk8
	1c688c47b509b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   45427fea44b56       busybox-7dff88458-8lxm5
	ae842d37f79ef       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      13 minutes ago       Exited              kindnet-cni               0                   16b1b97f4eee2       kindnet-599gk
	fced6ce81805e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago       Exited              kube-proxy                0                   c7bb352443d32       kube-proxy-4rfbj
	4afcf5ad24d43       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   4415d47ee85c8       kube-scheduler-ha-365438
	ee90a7de312ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   265048ac4715e       etcd-ha-365438
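The table above is the collector's summary of every container on the primary control-plane node, running and exited. A roughly equivalent listing could be taken on the node itself with crictl; this is a sketch, again assuming the ha-365438 profile name:

  $ minikube ssh -p ha-365438 "sudo crictl ps -a"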
	
	
	==> coredns [214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71] <==
	Trace[459271206]: [10.001811232s] [10.001811232s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:49456->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:49456->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:49466->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:49466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
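These CoreDNS errors all point at the in-cluster API VIP (10.96.0.1:443) being unreachable while the apiserver restarted, not at DNS resolution itself. A hedged way to confirm once the apiserver is reachable again, assuming the kubectl context is named after the ha-365438 profile:

  $ kubectl --context ha-365438 get endpoints kubernetes
  $ kubectl --context ha-365438 -n kube-system logs -l k8s-app=kube-dns --tail=20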
	
	
	==> coredns [255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46371 - 37218 "HINFO IN 3877162347587772276.3800904195356776012. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021824973s
	
	
	==> coredns [786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46676 - 17519 "HINFO IN 4735384296121859020.7567965011438161124. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02453665s
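Because the refused connections above imply kubectl itself may not work at that moment, logs of the exited CoreDNS containers can still be read on the node via crictl, using the truncated container IDs from the status table; a sketch assuming the ha-365438 profile:

  $ minikube ssh -p ha-365438 "sudo crictl logs 786c916a75ad8"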
	
	
	==> coredns [c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43646->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43646->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43694->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43694->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-365438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T18_10_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:10:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:23:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-365438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 428a6b3869674553b5fa368f548d44fe
	  System UUID:                428a6b38-6967-4553-b5fa-368f548d44fe
	  Boot ID:                    bf6a145c-4c83-434e-832f-5377ceb5d93e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8lxm5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-9svk8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7c65d6cfc9-zh7sm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-365438                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-599gk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-365438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-365438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-4rfbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-365438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-365438                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           13m                    node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-365438 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Warning  ContainerGCFailed        3m25s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m26s (x4 over 3m40s)  kubelet          Node ha-365438 status is now: NodeNotReady
	  Normal   RegisteredNode           102s                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   RegisteredNode           98s                    node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
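The event trail matches the failure pattern in the CRI-O and CoreDNS logs above: ContainerGCFailed on the missing crio.sock, a NodeNotReady window, then repeated re-registration once the control plane came back. A hedged re-check for this node, assuming the kubectl context follows the ha-365438 profile name:

  $ kubectl --context ha-365438 get nodes -o wide
  $ kubectl --context ha-365438 get events --field-selector involvedObject.name=ha-365438 --sort-by=.lastTimestamp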
	
	
	Name:               ha-365438-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_11_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:11:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:23:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-365438-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37dacc83603e40abb19ac133e9d2c030
	  System UUID:                37dacc83-603e-40ab-b19a-c133e9d2c030
	  Boot ID:                    5efe6112-356d-484c-ab2c-4ab05a97dc5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8whmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-365438-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-q2vlq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-365438-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-365438-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nrqvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-365438-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-365438-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-365438-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-365438-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-365438-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  NodeNotReady             8m46s                node-controller  Node ha-365438-m02 status is now: NodeNotReady
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node ha-365438-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           102s                 node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           98s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           35s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	
	
	Name:               ha-365438-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_12_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:12:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:23:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:23:29 +0000   Mon, 16 Sep 2024 18:22:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:23:29 +0000   Mon, 16 Sep 2024 18:22:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:23:29 +0000   Mon, 16 Sep 2024 18:22:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:23:29 +0000   Mon, 16 Sep 2024 18:22:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.231
	  Hostname:    ha-365438-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 113546b28b1a45aca3d715558877ace5
	  System UUID:                113546b2-8b1a-45ac-a3d7-15558877ace5
	  Boot ID:                    54fee668-04f1-493f-a6ce-8ca9af6b6dc6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4hs24                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-365438-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-99gkn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-365438-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-365438-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mjljp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-365438-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-365438-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-365438-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal   RegisteredNode           102s               node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	  Normal   NodeNotReady             62s                node-controller  Node ha-365438-m03 status is now: NodeNotReady
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 54s                kubelet          Node ha-365438-m03 has been rebooted, boot id: 54fee668-04f1-493f-a6ce-8ca9af6b6dc6
	  Normal   NodeHasSufficientMemory  54s (x2 over 54s)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet          Node ha-365438-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x2 over 54s)  kubelet          Node ha-365438-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                54s                kubelet          Node ha-365438-m03 status is now: NodeReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-365438-m03 event: Registered Node ha-365438-m03 in Controller
	
	
	Name:               ha-365438-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_13_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:13:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:23:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:23:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:23:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:23:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-365438-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a60d15c35e49c89cf5c86d6e9e7127
	  System UUID:                19a60d15-c35e-49c8-9cf5-c86d6e9e7127
	  Boot ID:                    cd6712a5-a899-461f-b6b0-981c66ae101a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gjxct       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-pln82    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-365438-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   NodeReady                9m47s              kubelet          Node ha-365438-m04 status is now: NodeReady
	  Normal   RegisteredNode           102s               node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           98s                node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   NodeNotReady             62s                node-controller  Node ha-365438-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-365438-m04 has been rebooted, boot id: cd6712a5-a899-461f-b6b0-981c66ae101a
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-365438-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-365438-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-365438-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                 kubelet          Node ha-365438-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s                 kubelet          Node ha-365438-m04 status is now: NodeReady
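
The per-node conditions, allocations and events above come from a node describe; if the ha-365438 profile is still running, the same view can be regenerated with kubectl (a sketch, assuming the kubeconfig context that minikube creates for the profile):

    # List the four nodes with roles, kubelet version and internal IPs
    kubectl --context ha-365438 get nodes -o wide

    # Re-dump the full conditions/allocated-resources/events block for one node
    kubectl --context ha-365438 describe node ha-365438-m04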
	
	
	==> dmesg <==
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073590] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.211872] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.138357] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.296917] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.171159] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.216894] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.069713] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.331842] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.083288] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.257891] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.535666] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 18:11] kauditd_printk_skb: 26 callbacks suppressed
	[Sep16 18:18] kauditd_printk_skb: 1 callbacks suppressed
	[Sep16 18:21] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.161082] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.321192] systemd-fstab-generator[3650]: Ignoring "noauto" option for root device
	[  +0.220861] systemd-fstab-generator[3718]: Ignoring "noauto" option for root device
	[  +0.357855] systemd-fstab-generator[3802]: Ignoring "noauto" option for root device
	[ +10.572463] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[  +0.091284] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.000394] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.783601] kauditd_printk_skb: 46 callbacks suppressed
	[ +10.069815] kauditd_printk_skb: 1 callbacks suppressed
	[Sep16 18:22] kauditd_printk_skb: 5 callbacks suppressed
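
The kernel ring buffer above was captured on the ha-365438 VM; roughly the same view can be pulled from a live profile over minikube ssh (a sketch, assuming the VM is still up):

    # Open a shell on the ha-365438 control-plane VM
    minikube -p ha-365438 ssh

    # Inside the VM: ring buffer with human-readable timestamps
    sudo dmesg -T | tail -n 30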
	
	
	==> etcd [e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c] <==
	{"level":"warn","ts":"2024-09-16T18:22:52.658041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"ffc3b7517aaad9f6","from":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-16T18:22:53.764593Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.231:2380/version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:22:53.764694Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:22:55.833862Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"280a274dd8bdbcec","rtt":"0s","error":"dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:22:55.853127Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"280a274dd8bdbcec","rtt":"0s","error":"dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:22:57.767022Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.231:2380/version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:22:57.767236Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:00.835297Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"280a274dd8bdbcec","rtt":"0s","error":"dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:00.853621Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"280a274dd8bdbcec","rtt":"0s","error":"dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:01.770099Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.231:2380/version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:01.770243Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:05.772209Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.231:2380/version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:05.772329Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:05.836032Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"280a274dd8bdbcec","rtt":"0s","error":"dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:05.854533Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"280a274dd8bdbcec","rtt":"0s","error":"dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:09.774555Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.231:2380/version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T18:23:09.774642Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T18:23:10.159116Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.159228Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.162913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.178695Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"280a274dd8bdbcec","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T18:23:10.178849Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.190182Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"280a274dd8bdbcec","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T18:23:10.190315Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:21.703360Z","caller":"traceutil/trace.go:171","msg":"trace[1028167089] transaction","detail":"{read_only:false; response_revision:2479; number_of_response:1; }","duration":"121.996314ms","start":"2024-09-16T18:23:21.581339Z","end":"2024-09-16T18:23:21.703336Z","steps":["trace[1028167089] 'process raft request'  (duration: 118.773665ms)"],"step_count":1}
	
	
	==> etcd [ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce] <==
	{"level":"warn","ts":"2024-09-16T18:19:41.827320Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:19:41.073812Z","time spent":"753.500545ms","remote":"127.0.0.1:35348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	2024/09/16 18:19:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T18:19:41.892638Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:19:41.893016Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T18:19:41.894281Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T18:19:41.894528Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894611Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894686Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894825Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894900Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.895023Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.895047Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895094Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895134Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895234Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895309Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895377Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895421Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.899011Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"warn","ts":"2024-09-16T18:19:41.899032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.8488868s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T18:19:41.899171Z","caller":"traceutil/trace.go:171","msg":"trace[679369831] range","detail":"{range_begin:; range_end:; }","duration":"8.849034852s","start":"2024-09-16T18:19:33.050122Z","end":"2024-09-16T18:19:41.899157Z","steps":["trace[679369831] 'agreement among raft nodes before linearized reading'  (duration: 8.848885634s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T18:19:41.899223Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-09-16T18:19:41.899253Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-365438","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"error","ts":"2024-09-16T18:19:41.899294Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 18:23:53 up 14 min,  0 users,  load average: 0.38, 0.55, 0.33
	Linux ha-365438 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
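
The three lines of this section are, in order, the uptime, the kernel build string and the OS name; they correspond roughly to the following commands run inside the VM:

    uptime
    uname -a
    grep PRETTY_NAME /etc/os-release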
	
	
	==> kindnet [a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122] <==
	I0916 18:23:21.493186       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:23:31.490363       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:23:31.490420       1 main.go:299] handling current node
	I0916 18:23:31.490448       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:23:31.490454       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:23:31.490657       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:23:31.490684       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:23:31.490802       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:23:31.490829       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:23:41.498680       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:23:41.498845       1 main.go:299] handling current node
	I0916 18:23:41.498886       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:23:41.498911       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:23:41.499061       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:23:41.499106       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:23:41.499210       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:23:41.499236       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:23:51.490037       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:23:51.490096       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:23:51.490218       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:23:51.490224       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:23:51.490330       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:23:51.490336       1 main.go:299] handling current node
	I0916 18:23:51.490360       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:23:51.490364       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
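
Here kindnet is periodically walking the node list and noting each node's pod CIDR. The CIDR-to-node mapping it is handling can be cross-checked directly against the Node objects (a sketch, assuming the ha-365438 context):

    kubectl --context ha-365438 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'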
	
	
	==> kindnet [ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d] <==
	I0916 18:19:03.849323       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:13.857726       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:19:13.857806       1 main.go:299] handling current node
	I0916 18:19:13.857855       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:19:13.857862       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:19:13.858013       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:19:13.858041       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:19:13.858097       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:19:13.858120       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:23.850414       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:19:23.850527       1 main.go:299] handling current node
	I0916 18:19:23.850566       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:19:23.850575       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:19:23.850775       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:19:23.850805       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:19:23.850864       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:19:23.850887       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:33.848775       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:19:33.848898       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:33.849093       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:19:33.849119       1 main.go:299] handling current node
	I0916 18:19:33.849141       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:19:33.849158       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:19:33.849221       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:19:33.849239       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847] <==
	I0916 18:22:11.137335       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 18:22:11.181195       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 18:22:11.181314       1 policy_source.go:224] refreshing policies
	I0916 18:22:11.200969       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 18:22:11.201362       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 18:22:11.201687       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 18:22:11.202014       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 18:22:11.203626       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 18:22:11.203861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 18:22:11.203880       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 18:22:11.205711       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 18:22:11.211718       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0916 18:22:11.224670       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18 192.168.39.231]
	I0916 18:22:11.226101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 18:22:11.235577       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 18:22:11.235893       1 aggregator.go:171] initial CRD sync complete...
	I0916 18:22:11.236158       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 18:22:11.236186       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 18:22:11.236195       1 cache.go:39] Caches are synced for autoregister controller
	I0916 18:22:11.239304       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 18:22:11.247992       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 18:22:11.273729       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 18:22:12.108190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 18:22:12.564644       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.18 192.168.39.231]
	W0916 18:22:22.563948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.18]
	
	
	==> kube-apiserver [78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0] <==
	I0916 18:21:30.562192       1 options.go:228] external host was not specified, using 192.168.39.165
	I0916 18:21:30.579189       1 server.go:142] Version: v1.31.1
	I0916 18:21:30.579267       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:21:31.197949       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 18:21:31.208068       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 18:21:31.211938       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 18:21:31.211960       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 18:21:31.212191       1 instance.go:232] Using reconciler: lease
	W0916 18:21:51.197801       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 18:21:51.197800       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 18:21:51.213822       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0916 18:21:51.213898       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
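
This is the apiserver instance that died during the restart: it could not complete its etcd client connection to 127.0.0.1:2379 before the deadline, logged the fatal "Error creating leases" line and exited. The crash loop can be inspected inside the VM with crictl (a sketch; the ID prefix is taken from the header above):

    # Both apiserver containers, including the exited one
    sudo crictl ps -a --name kube-apiserver

    # Last lines of the failed instance
    sudo crictl logs --tail 20 78553547a158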
	
	
	==> kube-controller-manager [c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118] <==
	I0916 18:22:34.680623       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.358µs"
	I0916 18:22:50.664141       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:22:50.664561       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	I0916 18:22:50.667851       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:22:50.703238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:22:50.703293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:22:50.714095       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.128888ms"
	I0916 18:22:50.718346       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="126.251µs"
	I0916 18:22:54.340517       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:22:56.024948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:22:57.136960       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m02"
	I0916 18:22:58.437045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:22:58.452210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:22:59.303649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:22:59.390174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.669µs"
	I0916 18:23:04.423284       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:23:17.700883       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:23:17.804307       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:23:18.042872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.429975ms"
	I0916 18:23:18.044058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="125.623µs"
	I0916 18:23:29.469816       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:23:44.742976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:23:44.743225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	I0916 18:23:44.771780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:23:45.967031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	
	
	==> kube-controller-manager [c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827] <==
	I0916 18:21:31.403314       1 serving.go:386] Generated self-signed cert in-memory
	I0916 18:21:31.764335       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 18:21:31.764375       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:21:31.765956       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 18:21:31.766093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 18:21:31.766247       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 18:21:31.766404       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0916 18:21:52.221289       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.165:8443/healthz\": dial tcp 192.168.39.165:8443: connect: connection refused"
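
The failed controller-manager instance above gave up waiting on the apiserver's /healthz at 192.168.39.165:8443. That endpoint is usually anonymously readable, so it can be probed directly from inside the VM to see when the apiserver came back (a sketch; -k skips certificate verification):

    curl -sk https://192.168.39.165:8443/healthz
    curl -sk "https://192.168.39.165:8443/readyz?verbose"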
	
	
	==> kube-proxy [aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:21:34.896069       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:37.969418       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:41.040907       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:47.184957       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:56.400189       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 18:22:13.356450       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E0916 18:22:13.356755       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:22:13.438161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:22:13.438322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:22:13.438381       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:22:13.441495       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:22:13.442005       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:22:13.442239       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:22:13.444707       1 config.go:199] "Starting service config controller"
	I0916 18:22:13.444811       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:22:13.444901       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:22:13.444932       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:22:13.446205       1 config.go:328] "Starting node config controller"
	I0916 18:22:13.446294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:22:13.545979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 18:22:13.546009       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:22:13.546547       1 shared_informer.go:320] Caches are synced for node config
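
After several "no route to host" retries against the control-plane endpoint (192.168.39.254:8443), this kube-proxy instance eventually retrieved its node IP and synced its caches in iptables mode. Whether the service rules were actually programmed can be spot-checked inside the VM (a sketch; KUBE-SERVICES is the standard kube-proxy nat chain):

    sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20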
	
	
	==> kube-proxy [fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8] <==
	E0916 18:18:26.096016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:26.096125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:26.096225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:33.647870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:33.649099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:33.648124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0916 18:18:33.649217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:33.649191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0916 18:18:33.649274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:42.864741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:42.864910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:42.865000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:42.865122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:45.937931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:45.938181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:04.369742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:04.369868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:07.439946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:07.440078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:07.440007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:07.440652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:38.161714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:38.161805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:41.231907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:41.232006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65] <==
	W0916 18:22:02.894365       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:02.894584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.165:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:06.944318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.165:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:06.944540       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.165:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:08.151635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:08.151707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:08.151636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.165:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:08.151781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:09.122423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:09.122568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:09.377690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.165:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:09.378294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.165:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:09.386149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:09.386242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:11.145165       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 18:22:11.145238       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 18:22:11.189012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 18:22:11.189071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:22:11.189202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 18:22:11.189233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:22:11.189289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 18:22:11.189321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:22:11.189767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 18:22:11.193038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 18:22:34.031969       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804] <==
	I0916 18:12:38.217215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-99gkn" node="ha-365438-m03"
	I0916 18:13:06.562653       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="f2ef0616-2379-49c3-af53-b3779fb4448f" pod="default/busybox-7dff88458-4hs24" assumedNode="ha-365438-m03" currentNode="ha-365438-m02"
	E0916 18:13:06.587442       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4hs24" node="ha-365438-m02"
	E0916 18:13:06.587523       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f2ef0616-2379-49c3-af53-b3779fb4448f(default/busybox-7dff88458-4hs24) was assumed on ha-365438-m02 but assigned to ha-365438-m03" pod="default/busybox-7dff88458-4hs24"
	E0916 18:13:06.587555       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" pod="default/busybox-7dff88458-4hs24"
	I0916 18:13:06.587578       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4hs24" node="ha-365438-m03"
	E0916 18:13:06.618090       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	E0916 18:13:06.618528       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 11bd1f64-d695-4fc7-bec9-5694a7552fdf(default/busybox-7dff88458-8whmx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8whmx"
	E0916 18:13:06.618607       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" pod="default/busybox-7dff88458-8whmx"
	I0916 18:13:06.618663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	E0916 18:19:32.387108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 18:19:33.449916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0916 18:19:33.482561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 18:19:35.618140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0916 18:19:38.512116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 18:19:38.595642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 18:19:38.787364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0916 18:19:39.377753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0916 18:19:40.181265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0916 18:19:40.411295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 18:19:41.343359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	I0916 18:19:41.804892       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 18:19:41.805144       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 18:19:41.805398       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 18:19:41.805408       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 16 18:22:27 ha-365438 kubelet[1307]: E0916 18:22:27.406063    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510947405719495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:22:37 ha-365438 kubelet[1307]: E0916 18:22:37.412579    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510957408164472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:22:37 ha-365438 kubelet[1307]: E0916 18:22:37.412640    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510957408164472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:22:47 ha-365438 kubelet[1307]: E0916 18:22:47.414657    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510967414198468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:22:47 ha-365438 kubelet[1307]: E0916 18:22:47.415020    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510967414198468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:22:55 ha-365438 kubelet[1307]: I0916 18:22:55.215085    1307 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-365438" podUID="f3ed96ad-c5a8-4e6c-90a8-4ee1fa4d9bc4"
	Sep 16 18:22:55 ha-365438 kubelet[1307]: I0916 18:22:55.236785    1307 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-365438"
	Sep 16 18:22:57 ha-365438 kubelet[1307]: I0916 18:22:57.235212    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-365438" podStartSLOduration=2.235180768 podStartE2EDuration="2.235180768s" podCreationTimestamp="2024-09-16 18:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 18:22:57.234675407 +0000 UTC m=+750.220154139" watchObservedRunningTime="2024-09-16 18:22:57.235180768 +0000 UTC m=+750.220659497"
	Sep 16 18:22:57 ha-365438 kubelet[1307]: E0916 18:22:57.417097    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510977416780109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:22:57 ha-365438 kubelet[1307]: E0916 18:22:57.417139    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510977416780109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:07 ha-365438 kubelet[1307]: E0916 18:23:07.420116    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510987419674521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:07 ha-365438 kubelet[1307]: E0916 18:23:07.420184    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510987419674521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:17 ha-365438 kubelet[1307]: E0916 18:23:17.423850    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510997423125442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:17 ha-365438 kubelet[1307]: E0916 18:23:17.423912    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726510997423125442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:27 ha-365438 kubelet[1307]: E0916 18:23:27.257611    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:23:27 ha-365438 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:23:27 ha-365438 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:23:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:23:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:23:27 ha-365438 kubelet[1307]: E0916 18:23:27.426258    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511007425857878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:27 ha-365438 kubelet[1307]: E0916 18:23:27.426297    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511007425857878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:37 ha-365438 kubelet[1307]: E0916 18:23:37.429273    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511017428704378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:37 ha-365438 kubelet[1307]: E0916 18:23:37.429318    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511017428704378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:47 ha-365438 kubelet[1307]: E0916 18:23:47.431716    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511027431015220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:23:47 ha-365438 kubelet[1307]: E0916 18:23:47.431773    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511027431015220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 18:23:51.731159  400813 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19649-371203/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-365438 -n ha-365438
helpers_test.go:261: (dbg) Run:  kubectl --context ha-365438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.22s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 stop -v=7 --alsologtostderr: exit status 82 (2m0.469501249s)

                                                
                                                
-- stdout --
	* Stopping node "ha-365438-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:24:11.365460  401224 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:24:11.365725  401224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:24:11.365735  401224 out.go:358] Setting ErrFile to fd 2...
	I0916 18:24:11.365739  401224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:24:11.365904  401224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:24:11.366123  401224 out.go:352] Setting JSON to false
	I0916 18:24:11.366203  401224 mustload.go:65] Loading cluster: ha-365438
	I0916 18:24:11.366604  401224 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:24:11.366685  401224 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:24:11.366870  401224 mustload.go:65] Loading cluster: ha-365438
	I0916 18:24:11.367000  401224 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:24:11.367021  401224 stop.go:39] StopHost: ha-365438-m04
	I0916 18:24:11.367444  401224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:24:11.367488  401224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:24:11.383456  401224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0916 18:24:11.383984  401224 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:24:11.384693  401224 main.go:141] libmachine: Using API Version  1
	I0916 18:24:11.384722  401224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:24:11.385113  401224 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:24:11.388312  401224 out.go:177] * Stopping node "ha-365438-m04"  ...
	I0916 18:24:11.389470  401224 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0916 18:24:11.389499  401224 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:24:11.389754  401224 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0916 18:24:11.389789  401224 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:24:11.392543  401224 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:24:11.392969  401224 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:23:39 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:24:11.393003  401224 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:24:11.393185  401224 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:24:11.393350  401224 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:24:11.393512  401224 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:24:11.393648  401224 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	I0916 18:24:11.476929  401224 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0916 18:24:11.531447  401224 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0916 18:24:11.584751  401224 main.go:141] libmachine: Stopping "ha-365438-m04"...
	I0916 18:24:11.584789  401224 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:24:11.586502  401224 main.go:141] libmachine: (ha-365438-m04) Calling .Stop
	I0916 18:24:11.589753  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 0/120
	I0916 18:24:12.591369  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 1/120
	I0916 18:24:13.592816  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 2/120
	I0916 18:24:14.594627  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 3/120
	I0916 18:24:15.595953  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 4/120
	I0916 18:24:16.597983  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 5/120
	I0916 18:24:17.599334  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 6/120
	I0916 18:24:18.600978  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 7/120
	I0916 18:24:19.602552  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 8/120
	I0916 18:24:20.603961  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 9/120
	I0916 18:24:21.605678  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 10/120
	I0916 18:24:22.607432  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 11/120
	I0916 18:24:23.608763  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 12/120
	I0916 18:24:24.610417  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 13/120
	I0916 18:24:25.611788  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 14/120
	I0916 18:24:26.613819  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 15/120
	I0916 18:24:27.615482  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 16/120
	I0916 18:24:28.616867  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 17/120
	I0916 18:24:29.618325  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 18/120
	I0916 18:24:30.619977  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 19/120
	I0916 18:24:31.622334  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 20/120
	I0916 18:24:32.623749  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 21/120
	I0916 18:24:33.625288  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 22/120
	I0916 18:24:34.627676  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 23/120
	I0916 18:24:35.628954  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 24/120
	I0916 18:24:36.630284  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 25/120
	I0916 18:24:37.631607  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 26/120
	I0916 18:24:38.633103  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 27/120
	I0916 18:24:39.634479  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 28/120
	I0916 18:24:40.635780  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 29/120
	I0916 18:24:41.637785  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 30/120
	I0916 18:24:42.639061  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 31/120
	I0916 18:24:43.640772  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 32/120
	I0916 18:24:44.642209  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 33/120
	I0916 18:24:45.643667  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 34/120
	I0916 18:24:46.645227  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 35/120
	I0916 18:24:47.646530  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 36/120
	I0916 18:24:48.648061  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 37/120
	I0916 18:24:49.649452  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 38/120
	I0916 18:24:50.650650  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 39/120
	I0916 18:24:51.652358  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 40/120
	I0916 18:24:52.653835  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 41/120
	I0916 18:24:53.655406  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 42/120
	I0916 18:24:54.656901  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 43/120
	I0916 18:24:55.658476  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 44/120
	I0916 18:24:56.660529  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 45/120
	I0916 18:24:57.662052  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 46/120
	I0916 18:24:58.663413  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 47/120
	I0916 18:24:59.664736  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 48/120
	I0916 18:25:00.666366  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 49/120
	I0916 18:25:01.668728  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 50/120
	I0916 18:25:02.670111  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 51/120
	I0916 18:25:03.671567  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 52/120
	I0916 18:25:04.673067  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 53/120
	I0916 18:25:05.674971  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 54/120
	I0916 18:25:06.676982  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 55/120
	I0916 18:25:07.678389  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 56/120
	I0916 18:25:08.679811  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 57/120
	I0916 18:25:09.681112  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 58/120
	I0916 18:25:10.682715  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 59/120
	I0916 18:25:11.684985  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 60/120
	I0916 18:25:12.686481  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 61/120
	I0916 18:25:13.688236  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 62/120
	I0916 18:25:14.690047  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 63/120
	I0916 18:25:15.691806  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 64/120
	I0916 18:25:16.693284  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 65/120
	I0916 18:25:17.695269  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 66/120
	I0916 18:25:18.697732  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 67/120
	I0916 18:25:19.699596  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 68/120
	I0916 18:25:20.701124  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 69/120
	I0916 18:25:21.702591  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 70/120
	I0916 18:25:22.704062  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 71/120
	I0916 18:25:23.705506  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 72/120
	I0916 18:25:24.707550  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 73/120
	I0916 18:25:25.709070  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 74/120
	I0916 18:25:26.710912  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 75/120
	I0916 18:25:27.712313  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 76/120
	I0916 18:25:28.713743  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 77/120
	I0916 18:25:29.715462  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 78/120
	I0916 18:25:30.716716  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 79/120
	I0916 18:25:31.718891  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 80/120
	I0916 18:25:32.720132  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 81/120
	I0916 18:25:33.721470  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 82/120
	I0916 18:25:34.723374  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 83/120
	I0916 18:25:35.724841  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 84/120
	I0916 18:25:36.726885  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 85/120
	I0916 18:25:37.728161  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 86/120
	I0916 18:25:38.730134  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 87/120
	I0916 18:25:39.731286  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 88/120
	I0916 18:25:40.732623  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 89/120
	I0916 18:25:41.734853  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 90/120
	I0916 18:25:42.736258  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 91/120
	I0916 18:25:43.737485  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 92/120
	I0916 18:25:44.739252  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 93/120
	I0916 18:25:45.740569  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 94/120
	I0916 18:25:46.742140  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 95/120
	I0916 18:25:47.743502  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 96/120
	I0916 18:25:48.744978  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 97/120
	I0916 18:25:49.746161  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 98/120
	I0916 18:25:50.747662  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 99/120
	I0916 18:25:51.749914  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 100/120
	I0916 18:25:52.751432  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 101/120
	I0916 18:25:53.752770  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 102/120
	I0916 18:25:54.754196  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 103/120
	I0916 18:25:55.755438  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 104/120
	I0916 18:25:56.757508  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 105/120
	I0916 18:25:57.758757  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 106/120
	I0916 18:25:58.760087  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 107/120
	I0916 18:25:59.761318  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 108/120
	I0916 18:26:00.762600  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 109/120
	I0916 18:26:01.764883  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 110/120
	I0916 18:26:02.766187  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 111/120
	I0916 18:26:03.767594  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 112/120
	I0916 18:26:04.768836  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 113/120
	I0916 18:26:05.770470  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 114/120
	I0916 18:26:06.772479  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 115/120
	I0916 18:26:07.774598  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 116/120
	I0916 18:26:08.775840  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 117/120
	I0916 18:26:09.777142  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 118/120
	I0916 18:26:10.779727  401224 main.go:141] libmachine: (ha-365438-m04) Waiting for machine to stop 119/120
	I0916 18:26:11.780972  401224 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0916 18:26:11.781065  401224 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0916 18:26:11.783606  401224 out.go:201] 
	W0916 18:26:11.785134  401224 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0916 18:26:11.785154  401224 out.go:270] * 
	* 
	W0916 18:26:11.788478  401224 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 18:26:11.789940  401224 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-365438 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr: exit status 3 (18.981745083s)

                                                
                                                
-- stdout --
	ha-365438
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-365438-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:26:11.840496  401655 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:26:11.840627  401655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:26:11.840639  401655 out.go:358] Setting ErrFile to fd 2...
	I0916 18:26:11.840646  401655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:26:11.840872  401655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:26:11.841116  401655 out.go:352] Setting JSON to false
	I0916 18:26:11.841150  401655 mustload.go:65] Loading cluster: ha-365438
	I0916 18:26:11.841251  401655 notify.go:220] Checking for updates...
	I0916 18:26:11.841706  401655 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:26:11.841739  401655 status.go:255] checking status of ha-365438 ...
	I0916 18:26:11.842225  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:11.842291  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:11.865628  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0916 18:26:11.866305  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:11.867025  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:11.867052  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:11.867561  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:11.867822  401655 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:26:11.869942  401655 status.go:330] ha-365438 host status = "Running" (err=<nil>)
	I0916 18:26:11.869963  401655 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:26:11.870259  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:11.870296  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:11.886564  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37595
	I0916 18:26:11.887134  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:11.887700  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:11.887727  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:11.888046  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:11.888236  401655 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:26:11.891406  401655 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:26:11.891885  401655 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:26:11.891916  401655 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:26:11.892049  401655 host.go:66] Checking if "ha-365438" exists ...
	I0916 18:26:11.892343  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:11.892403  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:11.907917  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0916 18:26:11.908417  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:11.909085  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:11.909113  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:11.909485  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:11.909676  401655 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:26:11.909847  401655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:26:11.909880  401655 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:26:11.912697  401655 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:26:11.913147  401655 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:26:11.913175  401655 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:26:11.913274  401655 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:26:11.913466  401655 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:26:11.913603  401655 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:26:11.913736  401655 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:26:12.002635  401655 ssh_runner.go:195] Run: systemctl --version
	I0916 18:26:12.010980  401655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:26:12.032297  401655 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:26:12.032340  401655 api_server.go:166] Checking apiserver status ...
	I0916 18:26:12.032373  401655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:26:12.051294  401655 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5144/cgroup
	W0916 18:26:12.063110  401655 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5144/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:26:12.063175  401655 ssh_runner.go:195] Run: ls
	I0916 18:26:12.068783  401655 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:26:12.073263  401655 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:26:12.073296  401655 status.go:422] ha-365438 apiserver status = Running (err=<nil>)
	I0916 18:26:12.073307  401655 status.go:257] ha-365438 status: &{Name:ha-365438 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:26:12.073344  401655 status.go:255] checking status of ha-365438-m02 ...
	I0916 18:26:12.073676  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:12.073742  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:12.089036  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0916 18:26:12.089529  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:12.090020  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:12.090040  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:12.090414  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:12.090619  401655 main.go:141] libmachine: (ha-365438-m02) Calling .GetState
	I0916 18:26:12.092371  401655 status.go:330] ha-365438-m02 host status = "Running" (err=<nil>)
	I0916 18:26:12.092403  401655 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:26:12.092694  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:12.092731  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:12.109051  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0916 18:26:12.109543  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:12.110156  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:12.110183  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:12.110559  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:12.110847  401655 main.go:141] libmachine: (ha-365438-m02) Calling .GetIP
	I0916 18:26:12.114225  401655 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:26:12.114716  401655 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:21:37 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:26:12.114750  401655 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:26:12.114953  401655 host.go:66] Checking if "ha-365438-m02" exists ...
	I0916 18:26:12.115502  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:12.115573  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:12.131157  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0916 18:26:12.131675  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:12.132218  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:12.132240  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:12.132565  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:12.132772  401655 main.go:141] libmachine: (ha-365438-m02) Calling .DriverName
	I0916 18:26:12.133002  401655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:26:12.133027  401655 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHHostname
	I0916 18:26:12.135738  401655 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:26:12.136182  401655 main.go:141] libmachine: (ha-365438-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:b2:f7", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:21:37 +0000 UTC Type:0 Mac:52:54:00:e9:b2:f7 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:ha-365438-m02 Clientid:01:52:54:00:e9:b2:f7}
	I0916 18:26:12.136211  401655 main.go:141] libmachine: (ha-365438-m02) DBG | domain ha-365438-m02 has defined IP address 192.168.39.18 and MAC address 52:54:00:e9:b2:f7 in network mk-ha-365438
	I0916 18:26:12.136386  401655 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHPort
	I0916 18:26:12.136559  401655 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHKeyPath
	I0916 18:26:12.136705  401655 main.go:141] libmachine: (ha-365438-m02) Calling .GetSSHUsername
	I0916 18:26:12.136853  401655 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m02/id_rsa Username:docker}
	I0916 18:26:12.224291  401655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:26:12.245501  401655 kubeconfig.go:125] found "ha-365438" server: "https://192.168.39.254:8443"
	I0916 18:26:12.245538  401655 api_server.go:166] Checking apiserver status ...
	I0916 18:26:12.245594  401655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:26:12.263196  401655 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	W0916 18:26:12.279007  401655 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:26:12.279084  401655 ssh_runner.go:195] Run: ls
	I0916 18:26:12.284714  401655 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0916 18:26:12.292144  401655 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0916 18:26:12.292172  401655 status.go:422] ha-365438-m02 apiserver status = Running (err=<nil>)
	I0916 18:26:12.292181  401655 status.go:257] ha-365438-m02 status: &{Name:ha-365438-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:26:12.292199  401655 status.go:255] checking status of ha-365438-m04 ...
	I0916 18:26:12.292547  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:12.292590  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:12.309038  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36827
	I0916 18:26:12.309507  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:12.310257  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:12.310283  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:12.310639  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:12.310880  401655 main.go:141] libmachine: (ha-365438-m04) Calling .GetState
	I0916 18:26:12.312437  401655 status.go:330] ha-365438-m04 host status = "Running" (err=<nil>)
	I0916 18:26:12.312457  401655 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:26:12.312745  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:12.312793  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:12.329335  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I0916 18:26:12.329889  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:12.330430  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:12.330459  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:12.330865  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:12.331123  401655 main.go:141] libmachine: (ha-365438-m04) Calling .GetIP
	I0916 18:26:12.334983  401655 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:26:12.335521  401655 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:23:39 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:26:12.335570  401655 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:26:12.335757  401655 host.go:66] Checking if "ha-365438-m04" exists ...
	I0916 18:26:12.336095  401655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:26:12.336158  401655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:26:12.352735  401655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34605
	I0916 18:26:12.353364  401655 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:26:12.353983  401655 main.go:141] libmachine: Using API Version  1
	I0916 18:26:12.354012  401655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:26:12.354435  401655 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:26:12.354682  401655 main.go:141] libmachine: (ha-365438-m04) Calling .DriverName
	I0916 18:26:12.354925  401655 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:26:12.354955  401655 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHHostname
	I0916 18:26:12.358492  401655 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:26:12.359204  401655 main.go:141] libmachine: (ha-365438-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:d7:69", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:23:39 +0000 UTC Type:0 Mac:52:54:00:65:d7:69 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-365438-m04 Clientid:01:52:54:00:65:d7:69}
	I0916 18:26:12.359249  401655 main.go:141] libmachine: (ha-365438-m04) DBG | domain ha-365438-m04 has defined IP address 192.168.39.27 and MAC address 52:54:00:65:d7:69 in network mk-ha-365438
	I0916 18:26:12.359480  401655 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHPort
	I0916 18:26:12.359740  401655 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHKeyPath
	I0916 18:26:12.359913  401655 main.go:141] libmachine: (ha-365438-m04) Calling .GetSSHUsername
	I0916 18:26:12.360046  401655 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438-m04/id_rsa Username:docker}
	W0916 18:26:30.773150  401655 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.27:22: connect: no route to host
	W0916 18:26:30.773272  401655 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	E0916 18:26:30.773290  401655 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host
	I0916 18:26:30.773298  401655 status.go:257] ha-365438-m04 status: &{Name:ha-365438-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0916 18:26:30.773318  401655 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.27:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-365438 -n ha-365438
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-365438 logs -n 25: (1.842468595s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m04 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp testdata/cp-test.txt                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438:/home/docker/cp-test_ha-365438-m04_ha-365438.txt                       |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438 sudo cat                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438.txt                                 |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m02:/home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m02 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m03:/home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n                                                                 | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | ha-365438-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-365438 ssh -n ha-365438-m03 sudo cat                                          | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC | 16 Sep 24 18:14 UTC |
	|         | /home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-365438 node stop m02 -v=7                                                     | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-365438 node start m02 -v=7                                                    | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-365438 -v=7                                                           | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-365438 -v=7                                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-365438 --wait=true -v=7                                                    | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:19 UTC | 16 Sep 24 18:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-365438                                                                | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:23 UTC |                     |
	| node    | ha-365438 node delete m03 -v=7                                                   | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:23 UTC | 16 Sep 24 18:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-365438 stop -v=7                                                              | ha-365438 | jenkins | v1.34.0 | 16 Sep 24 18:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 18:19:40
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 18:19:40.912099  399410 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:19:40.912414  399410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:19:40.912425  399410 out.go:358] Setting ErrFile to fd 2...
	I0916 18:19:40.912431  399410 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:19:40.912605  399410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:19:40.913213  399410 out.go:352] Setting JSON to false
	I0916 18:19:40.914236  399410 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":7324,"bootTime":1726503457,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:19:40.914356  399410 start.go:139] virtualization: kvm guest
	I0916 18:19:40.916741  399410 out.go:177] * [ha-365438] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:19:40.919463  399410 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:19:40.919503  399410 notify.go:220] Checking for updates...
	I0916 18:19:40.921607  399410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:19:40.922830  399410 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:19:40.924238  399410 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:19:40.925922  399410 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:19:40.927166  399410 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:19:40.928911  399410 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:19:40.929136  399410 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:19:40.929824  399410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:19:40.929882  399410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:19:40.945726  399410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45645
	I0916 18:19:40.946243  399410 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:19:40.946931  399410 main.go:141] libmachine: Using API Version  1
	I0916 18:19:40.946952  399410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:19:40.947312  399410 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:19:40.947522  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:19:40.986345  399410 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 18:19:40.987846  399410 start.go:297] selected driver: kvm2
	I0916 18:19:40.987870  399410 start.go:901] validating driver "kvm2" against &{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:19:40.988019  399410 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:19:40.988379  399410 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:19:40.988477  399410 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 18:19:41.004742  399410 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 18:19:41.005475  399410 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:19:41.005540  399410 cni.go:84] Creating CNI manager for ""
	I0916 18:19:41.005623  399410 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 18:19:41.005694  399410 start.go:340] cluster config:
	{Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:19:41.005821  399410 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:19:41.007899  399410 out.go:177] * Starting "ha-365438" primary control-plane node in "ha-365438" cluster
	I0916 18:19:41.009138  399410 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:19:41.009187  399410 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 18:19:41.009204  399410 cache.go:56] Caching tarball of preloaded images
	I0916 18:19:41.009310  399410 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:19:41.009322  399410 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:19:41.009455  399410 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/config.json ...
	I0916 18:19:41.009666  399410 start.go:360] acquireMachinesLock for ha-365438: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:19:41.009711  399410 start.go:364] duration metric: took 23.815µs to acquireMachinesLock for "ha-365438"
	I0916 18:19:41.009725  399410 start.go:96] Skipping create...Using existing machine configuration
	I0916 18:19:41.009731  399410 fix.go:54] fixHost starting: 
	I0916 18:19:41.009987  399410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:19:41.010021  399410 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:19:41.027086  399410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0916 18:19:41.027644  399410 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:19:41.028212  399410 main.go:141] libmachine: Using API Version  1
	I0916 18:19:41.028235  399410 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:19:41.028631  399410 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:19:41.028851  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:19:41.029052  399410 main.go:141] libmachine: (ha-365438) Calling .GetState
	I0916 18:19:41.030872  399410 fix.go:112] recreateIfNeeded on ha-365438: state=Running err=<nil>
	W0916 18:19:41.030900  399410 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 18:19:41.033154  399410 out.go:177] * Updating the running kvm2 "ha-365438" VM ...
	I0916 18:19:41.034648  399410 machine.go:93] provisionDockerMachine start ...
	I0916 18:19:41.034683  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:19:41.034992  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.038047  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.038602  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.038629  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.038756  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.038942  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.039067  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.039250  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.039435  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.039626  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.039639  399410 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 18:19:41.158482  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438
	
	I0916 18:19:41.158515  399410 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:19:41.158796  399410 buildroot.go:166] provisioning hostname "ha-365438"
	I0916 18:19:41.158830  399410 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:19:41.159043  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.161940  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.162335  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.162357  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.162571  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.162771  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.162913  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.163044  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.163187  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.163384  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.163396  399410 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-365438 && echo "ha-365438" | sudo tee /etc/hostname
	I0916 18:19:41.297073  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-365438
	
	I0916 18:19:41.297105  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.300421  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.300971  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.301002  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.301286  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.301515  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.301734  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.301875  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.302107  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.302339  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.302364  399410 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-365438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-365438/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-365438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:19:41.418336  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:19:41.418386  399410 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:19:41.418454  399410 buildroot.go:174] setting up certificates
	I0916 18:19:41.418467  399410 provision.go:84] configureAuth start
	I0916 18:19:41.418488  399410 main.go:141] libmachine: (ha-365438) Calling .GetMachineName
	I0916 18:19:41.418784  399410 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:19:41.421305  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.421712  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.421748  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.421991  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.424483  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.424857  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.424884  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.425013  399410 provision.go:143] copyHostCerts
	I0916 18:19:41.425041  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:19:41.425075  399410 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:19:41.425084  399410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:19:41.425150  399410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:19:41.425241  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:19:41.425258  399410 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:19:41.425262  399410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:19:41.425285  399410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:19:41.425328  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:19:41.425344  399410 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:19:41.425350  399410 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:19:41.425370  399410 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:19:41.425414  399410 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.ha-365438 san=[127.0.0.1 192.168.39.165 ha-365438 localhost minikube]
	I0916 18:19:41.512884  399410 provision.go:177] copyRemoteCerts
	I0916 18:19:41.512974  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:19:41.513000  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.515904  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.516268  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.516296  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.516458  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.516658  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.516816  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.516943  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:19:41.609301  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:19:41.609371  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:19:41.639940  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:19:41.640046  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 18:19:41.669729  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:19:41.669797  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 18:19:41.697347  399410 provision.go:87] duration metric: took 278.861856ms to configureAuth
	I0916 18:19:41.697377  399410 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:19:41.697610  399410 config.go:182] Loaded profile config "ha-365438": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:19:41.697692  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:19:41.700203  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.700618  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:19:41.700644  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:19:41.700812  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:19:41.701021  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.701156  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:19:41.701255  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:19:41.701379  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:19:41.701568  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:19:41.701585  399410 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:21:12.618321  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:21:12.618374  399410 machine.go:96] duration metric: took 1m31.583683256s to provisionDockerMachine
	I0916 18:21:12.618402  399410 start.go:293] postStartSetup for "ha-365438" (driver="kvm2")
	I0916 18:21:12.618419  399410 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:21:12.618449  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.618849  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:21:12.618897  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.622575  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.623110  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.623138  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.623381  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.623614  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.623801  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.623998  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:21:12.713212  399410 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:21:12.718486  399410 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:21:12.718524  399410 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:21:12.718603  399410 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:21:12.718711  399410 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:21:12.718726  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:21:12.718837  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:21:12.729199  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:21:12.755709  399410 start.go:296] duration metric: took 137.286751ms for postStartSetup
	I0916 18:21:12.755763  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.756102  399410 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0916 18:21:12.756135  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.758817  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.759167  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.759212  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.759363  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.759579  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.759818  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.760002  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	W0916 18:21:12.844343  399410 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0916 18:21:12.844373  399410 fix.go:56] duration metric: took 1m31.834641864s for fixHost
	I0916 18:21:12.844396  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.847148  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.847615  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.847649  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.847803  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.848010  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.848178  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.848290  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.848445  399410 main.go:141] libmachine: Using SSH client type: native
	I0916 18:21:12.848675  399410 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0916 18:21:12.848686  399410 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:21:12.958425  399410 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726510872.907942926
	
	I0916 18:21:12.958451  399410 fix.go:216] guest clock: 1726510872.907942926
	I0916 18:21:12.958461  399410 fix.go:229] Guest: 2024-09-16 18:21:12.907942926 +0000 UTC Remote: 2024-09-16 18:21:12.844380126 +0000 UTC m=+91.970613970 (delta=63.5628ms)
	I0916 18:21:12.958490  399410 fix.go:200] guest clock delta is within tolerance: 63.5628ms
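	The fix.go lines above compare the guest's "date +%s.%N" output against the host-side reference time and accept the 63.5628ms drift as being within tolerance. A minimal Go sketch of that comparison, using the two timestamps from the log, is below; the one-second threshold is illustrative and not taken from this log.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the drift between guest and host clocks is
// small enough to skip resynchronization. The threshold is illustrative; the
// value minikube actually uses is not shown in this log.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1726510872, 907942926) // 2024-09-16 18:21:12.907942926 UTC (guest clock)
	host := time.Unix(1726510872, 844380126)  // 2024-09-16 18:21:12.844380126 UTC (remote reference)

	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=63.5628ms within tolerance=true
}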
	I0916 18:21:12.958497  399410 start.go:83] releasing machines lock for "ha-365438", held for 1m31.948776509s
	I0916 18:21:12.958520  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.958831  399410 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:21:12.961428  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.961868  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.961891  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.962105  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.962749  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.962924  399410 main.go:141] libmachine: (ha-365438) Calling .DriverName
	I0916 18:21:12.963037  399410 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:21:12.963087  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.963120  399410 ssh_runner.go:195] Run: cat /version.json
	I0916 18:21:12.963145  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHHostname
	I0916 18:21:12.965812  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.965964  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.966209  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.966239  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.966384  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:12.966414  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.966420  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:12.966522  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHPort
	I0916 18:21:12.966587  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.966652  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHKeyPath
	I0916 18:21:12.966709  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.966848  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:21:12.966885  399410 main.go:141] libmachine: (ha-365438) Calling .GetSSHUsername
	I0916 18:21:12.967055  399410 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/ha-365438/id_rsa Username:docker}
	I0916 18:21:13.076791  399410 ssh_runner.go:195] Run: systemctl --version
	I0916 18:21:13.083790  399410 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:21:13.252252  399410 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 18:21:13.258683  399410 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:21:13.258769  399410 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:21:13.269948  399410 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 18:21:13.269983  399410 start.go:495] detecting cgroup driver to use...
	I0916 18:21:13.270066  399410 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:21:13.293897  399410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:21:13.311277  399410 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:21:13.311352  399410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:21:13.329239  399410 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:21:13.345557  399410 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:21:13.499118  399410 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:21:13.680893  399410 docker.go:233] disabling docker service ...
	I0916 18:21:13.680987  399410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:21:13.727541  399410 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:21:13.788960  399410 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:21:14.016465  399410 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:21:14.244848  399410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:21:14.268710  399410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:21:14.289169  399410 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:21:14.289287  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.300690  399410 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:21:14.300777  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.311599  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.322866  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.334643  399410 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:21:14.345705  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.356564  399410 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.369110  399410 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:21:14.380878  399410 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:21:14.391062  399410 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:21:14.400878  399410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:21:14.557573  399410 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:21:24.607106  399410 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.049483074s)
	I0916 18:21:24.607141  399410 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:21:24.607204  399410 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:21:24.612282  399410 start.go:563] Will wait 60s for crictl version
	I0916 18:21:24.612348  399410 ssh_runner.go:195] Run: which crictl
	I0916 18:21:24.616416  399410 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:21:24.656445  399410 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:21:24.656555  399410 ssh_runner.go:195] Run: crio --version
	I0916 18:21:24.689180  399410 ssh_runner.go:195] Run: crio --version
	I0916 18:21:24.722546  399410 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:21:24.724097  399410 main.go:141] libmachine: (ha-365438) Calling .GetIP
	I0916 18:21:24.727225  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:24.727814  399410 main.go:141] libmachine: (ha-365438) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:6c:bf", ip: ""} in network mk-ha-365438: {Iface:virbr1 ExpiryTime:2024-09-16 19:10:00 +0000 UTC Type:0 Mac:52:54:00:aa:6c:bf Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:ha-365438 Clientid:01:52:54:00:aa:6c:bf}
	I0916 18:21:24.727840  399410 main.go:141] libmachine: (ha-365438) DBG | domain ha-365438 has defined IP address 192.168.39.165 and MAC address 52:54:00:aa:6c:bf in network mk-ha-365438
	I0916 18:21:24.728080  399410 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:21:24.733387  399410 kubeadm.go:883] updating cluster {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 18:21:24.733527  399410 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:21:24.733600  399410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:21:24.784701  399410 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:21:24.784725  399410 crio.go:433] Images already preloaded, skipping extraction
	I0916 18:21:24.784775  399410 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:21:24.822301  399410 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:21:24.822328  399410 cache_images.go:84] Images are preloaded, skipping loading
	I0916 18:21:24.822337  399410 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.31.1 crio true true} ...
	I0916 18:21:24.822439  399410 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-365438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 18:21:24.822511  399410 ssh_runner.go:195] Run: crio config
	I0916 18:21:24.871266  399410 cni.go:84] Creating CNI manager for ""
	I0916 18:21:24.871297  399410 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 18:21:24.871329  399410 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 18:21:24.871358  399410 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-365438 NodeName:ha-365438 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 18:21:24.871528  399410 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-365438"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
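	The multi-document block above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is the kubeadm config that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few steps later. As an aside, a generic multi-document decode like the sketch below can be used to spot-check fields such as podSubnet or cgroupDriver before handing the file to kubeadm; the library choice and local file path are assumptions and this is not part of minikube.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumption: any YAML library with multi-document support would do
)

func main() {
	// Hypothetical local copy of the rendered config shown above.
	f, err := os.Open("kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("kind:", doc["kind"])
		if net, ok := doc["networking"].(map[string]interface{}); ok {
			fmt.Println("  podSubnet:", net["podSubnet"], "serviceSubnet:", net["serviceSubnet"])
		}
		if cg, ok := doc["cgroupDriver"]; ok {
			fmt.Println("  cgroupDriver:", cg)
		}
	}
}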
	
	I0916 18:21:24.871559  399410 kube-vip.go:115] generating kube-vip config ...
	I0916 18:21:24.871610  399410 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0916 18:21:24.883617  399410 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0916 18:21:24.883769  399410 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
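	The static-pod manifest above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml in the 1441-byte scp a few lines below. The sketch that follows only illustrates rendering the variable parts (VIP address, API port, load-balancing toggle) from a template; it is not minikube's actual kube-vip template, and the type and field names are made up.

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that vary between clusters in the manifest above.
// Names are illustrative, not minikube's.
type vipParams struct {
	VIP      string
	Port     string
	LBEnable bool
}

// A heavily trimmed version of the env section rendered in the log above.
const envTmpl = `    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "{{ .LBEnable }}"
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	p := vipParams{VIP: "192.168.39.254", Port: "8443", LBEnable: true}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}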
	I0916 18:21:24.883842  399410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:21:24.894146  399410 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 18:21:24.894228  399410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 18:21:24.904400  399410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0916 18:21:24.922914  399410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:21:24.941156  399410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0916 18:21:24.959542  399410 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0916 18:21:24.977487  399410 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0916 18:21:24.982931  399410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:21:25.129705  399410 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 18:21:25.145669  399410 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438 for IP: 192.168.39.165
	I0916 18:21:25.145693  399410 certs.go:194] generating shared ca certs ...
	I0916 18:21:25.145719  399410 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:21:25.145895  399410 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:21:25.145961  399410 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:21:25.145976  399410 certs.go:256] generating profile certs ...
	I0916 18:21:25.146079  399410 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/client.key
	I0916 18:21:25.146113  399410 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287
	I0916 18:21:25.146142  399410 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.165 192.168.39.18 192.168.39.231 192.168.39.254]
	I0916 18:21:25.226318  399410 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287 ...
	I0916 18:21:25.226356  399410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287: {Name:mk45ff29a074fb6aefea3420b5f16311d9c2952c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:21:25.226537  399410 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287 ...
	I0916 18:21:25.226548  399410 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287: {Name:mk47e4ca4bc91020185bcfb115bf39793da29b03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:21:25.226617  399410 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt.2d00a287 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt
	I0916 18:21:25.226798  399410 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key.2d00a287 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key
	I0916 18:21:25.226939  399410 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key
	I0916 18:21:25.226955  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:21:25.226968  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:21:25.226981  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:21:25.226994  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:21:25.227007  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:21:25.227020  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:21:25.227032  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:21:25.227043  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:21:25.227091  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:21:25.227121  399410 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:21:25.227130  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:21:25.227150  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:21:25.227170  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:21:25.227190  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:21:25.227225  399410 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:21:25.227250  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.227263  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.227275  399410 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.227875  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:21:25.256474  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:21:25.283947  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:21:25.311628  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:21:25.337090  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 18:21:25.362167  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 18:21:25.388549  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:21:25.413273  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/ha-365438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:21:25.438297  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:21:25.463289  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:21:25.488687  399410 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:21:25.514094  399410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 18:21:25.531567  399410 ssh_runner.go:195] Run: openssl version
	I0916 18:21:25.537825  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:21:25.549186  399410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.554226  399410 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.554297  399410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:21:25.560270  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:21:25.569960  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:21:25.580962  399410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.585883  399410 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.585942  399410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:21:25.591944  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:21:25.601890  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:21:25.613441  399410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.618211  399410 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.618276  399410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:21:25.624293  399410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 18:21:25.634093  399410 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:21:25.639019  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 18:21:25.645122  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 18:21:25.658051  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 18:21:25.664372  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 18:21:25.670510  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 18:21:25.676748  399410 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
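	The run of "openssl x509 -noout ... -checkend 86400" calls above verifies that none of the control-plane client and etcd certificates expire within the next 24 hours. Below is a hedged Go equivalent of a single such check; the helper name is invented, while the file path and the 86400-second window are the ones from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it reports whether
// the certificate at path expires within the given window. Name is illustrative.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 86400*time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}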
	I0916 18:21:25.682597  399410 kubeadm.go:392] StartCluster: {Name:ha-365438 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-365438 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.231 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.27 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:21:25.682733  399410 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 18:21:25.682795  399410 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 18:21:25.727890  399410 cri.go:89] found id: "786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111"
	I0916 18:21:25.727920  399410 cri.go:89] found id: "255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84"
	I0916 18:21:25.727926  399410 cri.go:89] found id: "d474ffcda9ea3b9dc66adad51f554d9ed54f7fe63b316cbe96c266b7311dc7e3"
	I0916 18:21:25.727931  399410 cri.go:89] found id: "ed95f724866d36c42ba065d18ace22308ee70c657fa2620f1fdcb326cc86b448"
	I0916 18:21:25.727935  399410 cri.go:89] found id: "8ade28da627b4f5198c66ae0f18cf962764bda43c0f4ceedcd43dcea8b1921c2"
	I0916 18:21:25.727940  399410 cri.go:89] found id: "8b22111b2c0ccf4d655ef72908353612f16023abbe0fdc2799d83b3f51a516d9"
	I0916 18:21:25.727944  399410 cri.go:89] found id: "637415283f8f3e0f2d2d2068751117dd958bc9af733c9419cfaaa17c8ce5ff5d"
	I0916 18:21:25.727947  399410 cri.go:89] found id: "cc48bfbff79f18ec298cbe49232d3744b9aa33729dc3a9c2af0a246eea14db61"
	I0916 18:21:25.727951  399410 cri.go:89] found id: "ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d"
	I0916 18:21:25.727958  399410 cri.go:89] found id: "fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8"
	I0916 18:21:25.727963  399410 cri.go:89] found id: "bdc152e65d13deee23e6a4e930c2bec2459d518921848cb45cb502441c635803"
	I0916 18:21:25.727967  399410 cri.go:89] found id: "4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804"
	I0916 18:21:25.727971  399410 cri.go:89] found id: "c88b73102e4d24757efe1e0eeaadf76fa2e54a16ec03a7a0ef165237bb480486"
	I0916 18:21:25.727976  399410 cri.go:89] found id: "ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce"
	I0916 18:21:25.727984  399410 cri.go:89] found id: "36d26d8df5e6b7e994c2c217d9ecb457ee45b9943a18bf522aeb79707b144ba6"
	I0916 18:21:25.727989  399410 cri.go:89] found id: ""
	I0916 18:21:25.728045  399410 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.395945858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511191395915113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae9e50a6-ba45-425d-b1e4-8cee278b2b18 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.396656890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36d29a2c-33be-44b6-aeb3-88b5339f0132 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.396711640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36d29a2c-33be-44b6-aeb3-88b5339f0132 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.397155209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36d29a2c-33be-44b6-aeb3-88b5339f0132 name=/runtime.v1.RuntimeService/ListContainers
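The block above is one complete /runtime.v1.RuntimeService/ListContainers response; the entries that follow repeat the same Version → ImageFsInfo → ListContainers cycle, which is the kubelet's periodic CRI polling of CRI-O. For reference only, the same three RPCs can be issued directly against the CRI socket. The sketch below is a minimal illustration, not part of the test harness; it assumes CRI-O's default socket path (/var/run/crio/crio.sock) and the k8s.io/cri-api v1 Go client, and the field names mirror the responses logged above.

	// Minimal sketch: issue the Version, ImageFsInfo and ListContainers RPCs
	// seen in the log against a CRI-O socket (assumed default path).
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed socket path; adjust if CRI-O is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		img := runtimeapi.NewImageServiceClient(conn)
	
		// VersionResponse{RuntimeName, RuntimeVersion, ...} as in the log.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.GetRuntimeName(), ver.GetRuntimeVersion())
	
		// ImageFsInfoResponse{ImageFilesystems: [{FsId{Mountpoint}, UsedBytes, ...}]}.
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.GetImageFilesystems() {
			fmt.Println("image fs:", f.GetFsId().GetMountpoint(),
				"used bytes:", f.GetUsedBytes().GetValue())
		}
	
		// An empty filter returns the full container list
		// ("No filters were applied, returning full container list").
		cs, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range cs.GetContainers() {
			fmt.Println(c.GetMetadata().GetName(), c.GetState())
		}
	}

From the command line, crictl version, crictl imagefsinfo and crictl ps -a exercise the same RPCs.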
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.447002602Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6349cf9c-a820-4111-99aa-c083cf2e3aa8 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.447131226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6349cf9c-a820-4111-99aa-c083cf2e3aa8 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.448275744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c982b55-8706-4221-afab-662adbb0a777 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.448971378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511191448939779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c982b55-8706-4221-afab-662adbb0a777 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.449659105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b832219b-883a-418d-8d08-6f00b8a1eff6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.449732489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b832219b-883a-418d-8d08-6f00b8a1eff6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.450107797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b832219b-883a-418d-8d08-6f00b8a1eff6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.502075256Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad49af1e-f56e-4d56-8749-2c55356ff7e7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.502847839Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad49af1e-f56e-4d56-8749-2c55356ff7e7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.510985772Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fd3681e-fd0b-4b8f-88a8-033b7493cc1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.511404825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511191511384088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fd3681e-fd0b-4b8f-88a8-033b7493cc1a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.512047310Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f061c7a-61fa-40f9-955f-a463f88a5a30 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.512122879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f061c7a-61fa-40f9-955f-a463f88a5a30 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.512586736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f061c7a-61fa-40f9-955f-a463f88a5a30 name=/runtime.v1.RuntimeService/ListContainers
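Each Request/Response pair above is tagged file="otel-collector/interceptors.go", i.e. the paired debug lines are emitted by CRI-O's gRPC interceptors rather than by the RPC handlers themselves. The sketch below is only a rough illustration of that pattern under stated assumptions (logrus logging, a hypothetical id correlator); it is not CRI-O's actual interceptor code.

	// Illustrative request/response logging via a gRPC unary server interceptor.
	// Not CRI-O's real implementation; the id correlator is a stand-in for the
	// id= field seen in the log lines above.
	package main
	
	import (
		"context"
	
		"github.com/google/uuid"
		"github.com/sirupsen/logrus"
		"google.golang.org/grpc"
	)
	
	func logUnary(ctx context.Context, req interface{},
		info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		id := uuid.New().String() // correlates the Request/Response pair
		logrus.Debugf("Request: %+v id=%s name=%s", req, id, info.FullMethod)
		resp, err := handler(ctx, req)
		if err != nil {
			logrus.Debugf("Error: %v id=%s name=%s", err, id, info.FullMethod)
			return nil, err
		}
		logrus.Debugf("Response: %+v id=%s name=%s", resp, id, info.FullMethod)
		return resp, nil
	}
	
	func main() {
		logrus.SetLevel(logrus.DebugLevel)
		// Registering the interceptor makes every RPC (Version, ImageFsInfo,
		// ListContainers, ...) emit paired debug lines without touching handlers.
		srv := grpc.NewServer(grpc.UnaryInterceptor(logUnary))
		_ = srv // RuntimeService/ImageService servers would be registered and served here.
	}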
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.559219001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5d0bc2af-c55c-4bb8-9cc5-3b5945424fa7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.559298545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5d0bc2af-c55c-4bb8-9cc5-3b5945424fa7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.560358919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af3d3dad-b276-4f0a-975e-4dd03731f48b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.561207877Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511191561180478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af3d3dad-b276-4f0a-975e-4dd03731f48b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.561966993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8aa9b496-e9e2-44ad-b00a-b1e4ef2399e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.562031581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8aa9b496-e9e2-44ad-b00a-b1e4ef2399e2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:26:31 ha-365438 crio[3810]: time="2024-09-16 18:26:31.562715176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:001573b7b963897c100f4246e9569a1858efde19de4871a516b2512c0e3190dc,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726510942240353228,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726510929223788182,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948266dfd539b20c86090e4152af37d79dd111c929846af3d2dd6be60beb2caa,PodSandboxId:cfdaad28f3f132471d9ebaf767e0b8896164754962a6a6162e1d9af660a8c49e,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726510923565846092,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726510922709625855,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a9f83ca4e24144c932389e2d7bfa9e5776346489c39f3827c0e05b2e59ab339,PodSandboxId:5123710beac110f0edad3459523d86d21131454ce66393f41f980a21094c691f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726510904217222378,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de135fa9f94332594cad8703eb446e7,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9,PodSandboxId:fc1d4a333ee93623d7e1ccb0882d0f38451405fadc3e465353fa9dbfdcab3e20,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894664510432,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71,PodSandboxId:a9c6610d86981e5223fcd8c324d940c98a5c733bf7d28016af919071c31eb213,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726510894640443945,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.kubernetes.container.hash:
2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6095c73ae01be7926284a767f3b8bad354b338cc9f4266eebf2417281e17ef6,PodSandboxId:0f90e064c7ae59e6747f091cc79075c0f4b0236259fd613b8d97b21ee73b1988,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726510890442951808,Labels:map[string]string{io.kubernetes.container.name: storage-provis
ioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e028ac1-4385-4d75-a80c-022a5bd90494,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74,PodSandboxId:f65201a81c37122ffb6f0c40b444efdd63d6ac36f487a476d232ec7dddef2a58,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726510890629151670,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122,PodSandboxId:39ddf209c6fc84e0b4a27f3085a906c3733d7f5e71572337f6c2cfee127e595f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726510890230591859,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65,PodSandboxId:5f38da9920a7cda5961b770be68ef886bff4fa4935eb1750875b33e1afcae703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726510890319311546,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0,PodSandboxId:8ed5e7ef07b860e2be4be25d60ccb7e158bfab0567b71abd9056c9ebe728ce34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726510890036547008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: c7b0ab34f4aee20f06faf7609d3e1205,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c,PodSandboxId:1dfcb43e6f37f44df91f850583f2447d5fed11441228bb05f0529f64d61a88d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726510890017648391,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotati
ons:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827,PodSandboxId:6d6354e7bb9520f9d941a3b71983f25cb9eb4640209e5441143a8e8747f9c682,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726510889948832900,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eba74aa50b0a68dd2cab9f3e21a77d6,},Ann
otations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111,PodSandboxId:c4de9c35b0bd6e117841af26d2cc6703911eb86ef84e015e1646430d61df3853,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510874013752965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zh7sm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06bf623-3365-4a96-9920-1732dbccb11e,},Annotations:map[string]string{io.ku
bernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84,PodSandboxId:2ada558a992d829fbde83f9065ebb68479bd6e91a90973b8ad1b4afd9fc23854,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726510873929584441,Labels:map[string]string{io.kubernetes.container.name: c
oredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9svk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d217bdc6-679b-4142-8b23-6b42ce62bed7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c688c47b509beb3e212332f9352c6624e3da6bad41f3ba98c777efed5faaaac,PodSandboxId:45427fea44b560f2bae72f642627ca9bc54f527eefcfb9b5baa804e3931495ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726510390752158691,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-8lxm5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 65d5a00f-1f34-4797-af18-9e71ca834a79,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d,PodSandboxId:16b1b97f4eee2bc46eb57bdf068bd08e26eaedbc27a022a5bdf8607394c49edb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726510232597363037,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-599gk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 707eec6e-e38e-440a-8c26-67e1cd5fb644,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8,PodSandboxId:c7bb352443d32f831f2d21aaa5625574bc69cafd6524dc837a655f14983f1eef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726510232322882036,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4rfbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe239922-db36-477f-9fe5-9635b598aae1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804,PodSandboxId:4415d47ee85c8b8f89c173c4d705f91f8fc6c81ad99bdab2a097298c0183f74b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe
954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726510221037045236,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb1947c92a1198b7f2706653997a7278,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce,PodSandboxId:265048ac4715e314d13f6bb32730eeb18a6acf1165e4851f39d24c215b8cd489,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTA
INER_EXITED,CreatedAt:1726510220989408400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-365438,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f73a2686ca3c9ae2e5b8e38bca6a1d1c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8aa9b496-e9e2-44ad-b00a-b1e4ef2399e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	001573b7b9638       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   0f90e064c7ae5       storage-provisioner
	62f143f8a5310       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   8ed5e7ef07b86       kube-apiserver-ha-365438
	948266dfd539b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   cfdaad28f3f13       busybox-7dff88458-8lxm5
	c1141e0bd181c       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   6d6354e7bb952       kube-controller-manager-ha-365438
	2a9f83ca4e241       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   5123710beac11       kube-vip-ha-365438
	c1340e7dafac3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   fc1d4a333ee93       coredns-7c65d6cfc9-9svk8
	214a069b1af9a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   2                   a9c6610d86981       coredns-7c65d6cfc9-zh7sm
	aa483917c6d08       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   f65201a81c371       kube-proxy-4rfbj
	c6095c73ae01b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   0f90e064c7ae5       storage-provisioner
	2ecc2680e8434       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   5f38da9920a7c       kube-scheduler-ha-365438
	a09ecff4ae95f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   39ddf209c6fc8       kindnet-599gk
	78553547a1583       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   8ed5e7ef07b86       kube-apiserver-ha-365438
	e636b3f9a2c87       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   1dfcb43e6f37f       etcd-ha-365438
	c128f1e941b11       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   6d6354e7bb952       kube-controller-manager-ha-365438
	786c916a75ad8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Exited              coredns                   1                   c4de9c35b0bd6       coredns-7c65d6cfc9-zh7sm
	255453aac7614       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Exited              coredns                   1                   2ada558a992d8       coredns-7c65d6cfc9-9svk8
	1c688c47b509b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   45427fea44b56       busybox-7dff88458-8lxm5
	ae842d37f79ef       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   16b1b97f4eee2       kindnet-599gk
	fced6ce81805e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   c7bb352443d32       kube-proxy-4rfbj
	4afcf5ad24d43       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      16 minutes ago      Exited              kube-scheduler            0                   4415d47ee85c8       kube-scheduler-ha-365438
	ee90a7de312ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   265048ac4715e       etcd-ha-365438
	
	
	==> coredns [214a069b1af9a5ad6afe76f632b8e8e144a8c53c7681ea77d5fd9a5606c67a71] <==
	Trace[459271206]: [10.001811232s] [10.001811232s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:49456->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:49456->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:49466->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:49466->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [255453aac7614673803ff0e6e030a56fa106ea3e73858652d792f4f8e5453d84] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46371 - 37218 "HINFO IN 3877162347587772276.3800904195356776012. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021824973s
	
	
	==> coredns [786c916a75ad81c7cd4b007fc9c4961ceabf1edb152c0fe8e5cf3aacd5c2b111] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46676 - 17519 "HINFO IN 4735384296121859020.7567965011438161124. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02453665s
	
	
	==> coredns [c1340e7dafac3e80d0e402852052ca26c0c9110eef35fafa1c4e6bda93d716e9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43646->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43646->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43694->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:43694->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-365438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T18_10_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:10:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:26:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:22:15 +0000   Mon, 16 Sep 2024 18:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    ha-365438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 428a6b3869674553b5fa368f548d44fe
	  System UUID:                428a6b38-6967-4553-b5fa-368f548d44fe
	  Boot ID:                    bf6a145c-4c83-434e-832f-5377ceb5d93e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8lxm5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-9svk8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-zh7sm             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-365438                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-599gk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-365438             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-365438    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4rfbj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-365438             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-365438                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 4m18s                 kube-proxy       
	  Normal   Starting                 15m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)     kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)     kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)     kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                   kubelet          Node ha-365438 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                   kubelet          Node ha-365438 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m                   kubelet          Node ha-365438 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   NodeReady                15m                   kubelet          Node ha-365438 status is now: NodeReady
	  Normal   RegisteredNode           15m                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   RegisteredNode           13m                   node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Warning  ContainerGCFailed        6m5s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m6s (x4 over 6m20s)  kubelet          Node ha-365438 status is now: NodeNotReady
	  Normal   RegisteredNode           4m22s                 node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   RegisteredNode           4m18s                 node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	  Normal   RegisteredNode           3m15s                 node-controller  Node ha-365438 event: Registered Node ha-365438 in Controller
	
	
	Name:               ha-365438-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_11_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:11:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:26:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:22:57 +0000   Mon, 16 Sep 2024 18:22:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    ha-365438-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37dacc83603e40abb19ac133e9d2c030
	  System UUID:                37dacc83-603e-40ab-b19a-c133e9d2c030
	  Boot ID:                    5efe6112-356d-484c-ab2c-4ab05a97dc5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8whmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-365438-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-q2vlq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-365438-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-365438-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nrqvf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-365438-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-365438-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-365438-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-365438-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-365438-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-365438-m02 status is now: NodeNotReady
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node ha-365438-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node ha-365438-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-365438-m02 event: Registered Node ha-365438-m02 in Controller
	
	
	Name:               ha-365438-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-365438-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=ha-365438
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_13_45_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:13:44 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-365438-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:24:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 18:23:44 +0000   Mon, 16 Sep 2024 18:24:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-365438-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 19a60d15c35e49c89cf5c86d6e9e7127
	  System UUID:                19a60d15-c35e-49c8-9cf5-c86d6e9e7127
	  Boot ID:                    cd6712a5-a899-461f-b6b0-981c66ae101a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9w42p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-gjxct              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-pln82           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-365438-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-365438-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-365438-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-365438-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m22s                  node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Normal   RegisteredNode           3m15s                  node-controller  Node ha-365438-m04 event: Registered Node ha-365438-m04 in Controller
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-365438-m04 has been rebooted, boot id: cd6712a5-a899-461f-b6b0-981c66ae101a
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-365438-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-365438-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-365438-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s                  kubelet          Node ha-365438-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m42s)   node-controller  Node ha-365438-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.058371] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073590] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.211872] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.138357] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.296917] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.171159] systemd-fstab-generator[750]: Ignoring "noauto" option for root device
	[  +4.216894] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.069713] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.331842] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.083288] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.257891] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.535666] kauditd_printk_skb: 38 callbacks suppressed
	[Sep16 18:11] kauditd_printk_skb: 26 callbacks suppressed
	[Sep16 18:18] kauditd_printk_skb: 1 callbacks suppressed
	[Sep16 18:21] systemd-fstab-generator[3533]: Ignoring "noauto" option for root device
	[  +0.161082] systemd-fstab-generator[3545]: Ignoring "noauto" option for root device
	[  +0.321192] systemd-fstab-generator[3650]: Ignoring "noauto" option for root device
	[  +0.220861] systemd-fstab-generator[3718]: Ignoring "noauto" option for root device
	[  +0.357855] systemd-fstab-generator[3802]: Ignoring "noauto" option for root device
	[ +10.572463] systemd-fstab-generator[3945]: Ignoring "noauto" option for root device
	[  +0.091284] kauditd_printk_skb: 120 callbacks suppressed
	[  +5.000394] kauditd_printk_skb: 55 callbacks suppressed
	[ +11.783601] kauditd_printk_skb: 46 callbacks suppressed
	[ +10.069815] kauditd_printk_skb: 1 callbacks suppressed
	[Sep16 18:22] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [e636b3f9a2c8734264fbcc86249255b577d6314245bcff1f91535d686f2b157c] <==
	{"level":"warn","ts":"2024-09-16T18:23:09.774642Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"280a274dd8bdbcec","error":"Get \"https://192.168.39.231:2380/version\": dial tcp 192.168.39.231:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T18:23:10.159116Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.159228Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.162913Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.178695Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"280a274dd8bdbcec","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T18:23:10.178849Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:10.190182Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"ffc3b7517aaad9f6","to":"280a274dd8bdbcec","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T18:23:10.190315Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:21.703360Z","caller":"traceutil/trace.go:171","msg":"trace[1028167089] transaction","detail":"{read_only:false; response_revision:2479; number_of_response:1; }","duration":"121.996314ms","start":"2024-09-16T18:23:21.581339Z","end":"2024-09-16T18:23:21.703336Z","steps":["trace[1028167089] 'process raft request'  (duration: 118.773665ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T18:23:58.189563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 switched to configuration voters=(2784534752242365981 18429775660708452854)"}
	{"level":"info","ts":"2024-09-16T18:23:58.192753Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","removed-remote-peer-id":"280a274dd8bdbcec","removed-remote-peer-urls":["https://192.168.39.231:2380"]}
	{"level":"info","ts":"2024-09-16T18:23:58.192890Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"warn","ts":"2024-09-16T18:23:58.193586Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:58.193722Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"warn","ts":"2024-09-16T18:23:58.194125Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:58.194218Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:58.194304Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"warn","ts":"2024-09-16T18:23:58.194695Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T18:23:58.194783Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"280a274dd8bdbcec","error":"failed to read 280a274dd8bdbcec on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T18:23:58.194845Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"warn","ts":"2024-09-16T18:23:58.195108Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec","error":"context canceled"}
	{"level":"info","ts":"2024-09-16T18:23:58.195176Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:58.195246Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:23:58.195309Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"ffc3b7517aaad9f6","removed-remote-peer-id":"280a274dd8bdbcec"}
	{"level":"warn","ts":"2024-09-16T18:23:58.217198Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"ffc3b7517aaad9f6","remote-peer-id-stream-handler":"ffc3b7517aaad9f6","remote-peer-id-from":"280a274dd8bdbcec"}
	
	
	==> etcd [ee90a7de312ff58aaf9192211785cb41bc719e60eb590bb0e7b8e1584013e6ce] <==
	{"level":"warn","ts":"2024-09-16T18:19:41.827320Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:19:41.073812Z","time spent":"753.500545ms","remote":"127.0.0.1:35348","response type":"/etcdserverpb.KV/Range","request count":0,"request size":95,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" limit:10000 "}
	2024/09/16 18:19:41 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T18:19:41.892638Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:19:41.893016Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.165:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T18:19:41.894281Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"ffc3b7517aaad9f6","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-16T18:19:41.894528Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894611Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894686Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894825Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894900Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.894990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.895023Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"26a4a79aa422f61d"}
	{"level":"info","ts":"2024-09-16T18:19:41.895047Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895094Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895134Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895234Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895309Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895377Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"ffc3b7517aaad9f6","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.895421Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"280a274dd8bdbcec"}
	{"level":"info","ts":"2024-09-16T18:19:41.899011Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"warn","ts":"2024-09-16T18:19:41.899032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.8488868s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-16T18:19:41.899171Z","caller":"traceutil/trace.go:171","msg":"trace[679369831] range","detail":"{range_begin:; range_end:; }","duration":"8.849034852s","start":"2024-09-16T18:19:33.050122Z","end":"2024-09-16T18:19:41.899157Z","steps":["trace[679369831] 'agreement among raft nodes before linearized reading'  (duration: 8.848885634s)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T18:19:41.899223Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-09-16T18:19:41.899253Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-365438","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"]}
	{"level":"error","ts":"2024-09-16T18:19:41.899294Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	
	
	==> kernel <==
	 18:26:32 up 16 min,  0 users,  load average: 0.21, 0.41, 0.31
	Linux ha-365438 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a09ecff4ae95fe65b14d4fa6e4e61ed6d8b6dd5e0e6bdd197d84fd534237b122] <==
	I0916 18:25:51.490647       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:26:01.489953       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:26:01.490143       1 main.go:299] handling current node
	I0916 18:26:01.490183       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:26:01.490202       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:26:01.490392       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:26:01.490421       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:26:11.498569       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:26:11.498604       1 main.go:299] handling current node
	I0916 18:26:11.498621       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:26:11.498625       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:26:11.498770       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:26:11.498798       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:26:21.499154       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:26:21.499337       1 main.go:299] handling current node
	I0916 18:26:21.499376       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:26:21.499396       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:26:21.499613       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:26:21.499648       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:26:31.490587       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:26:31.490641       1 main.go:299] handling current node
	I0916 18:26:31.490657       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:26:31.490663       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:26:31.490858       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:26:31.490864       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ae842d37f79ef475a9981ba0f869d70cf8bdb547b28a25dc58a7d11d95be142d] <==
	I0916 18:19:03.849323       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:13.857726       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:19:13.857806       1 main.go:299] handling current node
	I0916 18:19:13.857855       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:19:13.857862       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:19:13.858013       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:19:13.858041       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:19:13.858097       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:19:13.858120       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:23.850414       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:19:23.850527       1 main.go:299] handling current node
	I0916 18:19:23.850566       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:19:23.850575       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:19:23.850775       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:19:23.850805       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	I0916 18:19:23.850864       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:19:23.850887       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:33.848775       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0916 18:19:33.848898       1 main.go:322] Node ha-365438-m04 has CIDR [10.244.3.0/24] 
	I0916 18:19:33.849093       1 main.go:295] Handling node with IPs: map[192.168.39.165:{}]
	I0916 18:19:33.849119       1 main.go:299] handling current node
	I0916 18:19:33.849141       1 main.go:295] Handling node with IPs: map[192.168.39.18:{}]
	I0916 18:19:33.849158       1 main.go:322] Node ha-365438-m02 has CIDR [10.244.1.0/24] 
	I0916 18:19:33.849221       1 main.go:295] Handling node with IPs: map[192.168.39.231:{}]
	I0916 18:19:33.849239       1 main.go:322] Node ha-365438-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [62f143f8a5310b7d2a748b7336aaf4d60ce488c3c739b5d920fb29f86dff6847] <==
	I0916 18:22:11.181195       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 18:22:11.181314       1 policy_source.go:224] refreshing policies
	I0916 18:22:11.200969       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 18:22:11.201362       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 18:22:11.201687       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 18:22:11.202014       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 18:22:11.203626       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 18:22:11.203861       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 18:22:11.203880       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 18:22:11.205711       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 18:22:11.211718       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0916 18:22:11.224670       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.18 192.168.39.231]
	I0916 18:22:11.226101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 18:22:11.235577       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 18:22:11.235893       1 aggregator.go:171] initial CRD sync complete...
	I0916 18:22:11.236158       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 18:22:11.236186       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 18:22:11.236195       1 cache.go:39] Caches are synced for autoregister controller
	I0916 18:22:11.239304       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 18:22:11.247992       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 18:22:11.273729       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 18:22:12.108190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 18:22:12.564644       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.18 192.168.39.231]
	W0916 18:22:22.563948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.18]
	W0916 18:24:12.572315       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.165 192.168.39.18]
	
	
	==> kube-apiserver [78553547a15836d0cf5085d0001f03ba4ac1b75961dda1c0026fa1a65e2684a0] <==
	I0916 18:21:30.562192       1 options.go:228] external host was not specified, using 192.168.39.165
	I0916 18:21:30.579189       1 server.go:142] Version: v1.31.1
	I0916 18:21:30.579267       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:21:31.197949       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0916 18:21:31.208068       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 18:21:31.211938       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0916 18:21:31.211960       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0916 18:21:31.212191       1 instance.go:232] Using reconciler: lease
	W0916 18:21:51.197801       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 18:21:51.197800       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0916 18:21:51.213822       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0916 18:21:51.213898       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [c1141e0bd181cc7d10bd4181b1a7c7179ec321f083345761502b1284bd7d6118] <==
	I0916 18:23:55.061026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.250585ms"
	I0916 18:23:55.064183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.844µs"
	I0916 18:23:56.971550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="157.22µs"
	I0916 18:23:57.172987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="85.27µs"
	I0916 18:23:57.181574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="133.231µs"
	I0916 18:23:59.265287       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.577199ms"
	I0916 18:23:59.265427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.525µs"
	I0916 18:24:09.171616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m03"
	I0916 18:24:09.171823       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-365438-m04"
	E0916 18:24:14.283943       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:14.283997       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:14.284005       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:14.284010       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:14.284015       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:34.284571       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:34.284636       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:34.284643       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:34.284657       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	E0916 18:24:34.284663       1 gc_controller.go:151] "Failed to get node" err="node \"ha-365438-m03\" not found" logger="pod-garbage-collector-controller" node="ha-365438-m03"
	I0916 18:24:45.999332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:24:46.019803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:24:46.082779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.7309ms"
	I0916 18:24:46.082891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="41.97µs"
	I0916 18:24:49.433511       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	I0916 18:24:51.096266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-365438-m04"
	
	
	==> kube-controller-manager [c128f1e941b117be0dde658c2119101704cfb71893713f5269dfadf0c1062827] <==
	I0916 18:21:31.403314       1 serving.go:386] Generated self-signed cert in-memory
	I0916 18:21:31.764335       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 18:21:31.764375       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:21:31.765956       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 18:21:31.766093       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 18:21:31.766247       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 18:21:31.766404       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0916 18:21:52.221289       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.165:8443/healthz\": dial tcp 192.168.39.165:8443: connect: connection refused"
	
	
	==> kube-proxy [aa483917c6d08721b07638829b13e788c0ae706e43f901d5e86fedf9570cbe74] <==
	E0916 18:21:34.896069       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:37.969418       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:41.040907       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:47.184957       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0916 18:21:56.400189       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-365438\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0916 18:22:13.356450       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.165"]
	E0916 18:22:13.356755       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:22:13.438161       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:22:13.438322       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:22:13.438381       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:22:13.441495       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:22:13.442005       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:22:13.442239       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:22:13.444707       1 config.go:199] "Starting service config controller"
	I0916 18:22:13.444811       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:22:13.444901       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:22:13.444932       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:22:13.446205       1 config.go:328] "Starting node config controller"
	I0916 18:22:13.446294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:22:13.545979       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 18:22:13.546009       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:22:13.546547       1 shared_informer.go:320] Caches are synced for node config
	W0916 18:24:57.586803       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0916 18:24:57.586922       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0916 18:24:57.587042       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [fced6ce81805ef1050bf2e3d8facfa13f7402900a0142f75f22739359b21bcc8] <==
	E0916 18:18:26.096016       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:26.096125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:26.096225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:33.647870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:33.649099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:33.648124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	W0916 18:18:33.649217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:33.649191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E0916 18:18:33.649274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:42.864741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:42.864910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:42.865000       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:42.865122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:18:45.937931       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:18:45.938181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:04.369742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:04.369868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:07.439946       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:07.440078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-365438&resourceVersion=1868\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:07.440007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:07.440652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:38.161714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:38.161805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1881\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0916 18:19:41.231907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878": dial tcp 192.168.39.254:8443: connect: no route to host
	E0916 18:19:41.232006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1878\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [2ecc2680e8434caca77a02f3c9addf2ae0c9aeb3f466320f446bd81f15dfbd65] <==
	W0916 18:22:08.151635       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:08.151707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.165:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:08.151636       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.165:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:08.151781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.165:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:09.122423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:09.122568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.165:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:09.377690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.165:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:09.378294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.165:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:09.386149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.165:8443: connect: connection refused
	E0916 18:22:09.386242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.165:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.165:8443: connect: connection refused" logger="UnhandledError"
	W0916 18:22:11.145165       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 18:22:11.145238       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 18:22:11.189012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 18:22:11.189071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:22:11.189202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 18:22:11.189233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:22:11.189289       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 18:22:11.189321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:22:11.189767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 18:22:11.193038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 18:22:34.031969       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 18:23:54.905757       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9w42p\": pod busybox-7dff88458-9w42p is already assigned to node \"ha-365438-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-9w42p" node="ha-365438-m04"
	E0916 18:23:54.905895       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4e59db57-5860-4fe4-86f4-d89d7528190f(default/busybox-7dff88458-9w42p) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-9w42p"
	E0916 18:23:54.905953       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-9w42p\": pod busybox-7dff88458-9w42p is already assigned to node \"ha-365438-m04\"" pod="default/busybox-7dff88458-9w42p"
	I0916 18:23:54.906012       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-9w42p" node="ha-365438-m04"
	
	
	==> kube-scheduler [4afcf5ad24d43037f2c3eeb625c69389e8f6d45882cb136ff95ef1b983d48804] <==
	I0916 18:12:38.217215       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-99gkn" node="ha-365438-m03"
	I0916 18:13:06.562653       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="f2ef0616-2379-49c3-af53-b3779fb4448f" pod="default/busybox-7dff88458-4hs24" assumedNode="ha-365438-m03" currentNode="ha-365438-m02"
	E0916 18:13:06.587442       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-4hs24" node="ha-365438-m02"
	E0916 18:13:06.587523       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f2ef0616-2379-49c3-af53-b3779fb4448f(default/busybox-7dff88458-4hs24) was assumed on ha-365438-m02 but assigned to ha-365438-m03" pod="default/busybox-7dff88458-4hs24"
	E0916 18:13:06.587555       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-4hs24\": pod busybox-7dff88458-4hs24 is already assigned to node \"ha-365438-m03\"" pod="default/busybox-7dff88458-4hs24"
	I0916 18:13:06.587578       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-4hs24" node="ha-365438-m03"
	E0916 18:13:06.618090       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	E0916 18:13:06.618528       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 11bd1f64-d695-4fc7-bec9-5694a7552fdf(default/busybox-7dff88458-8whmx) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-8whmx"
	E0916 18:13:06.618607       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8whmx\": pod busybox-7dff88458-8whmx is already assigned to node \"ha-365438-m02\"" pod="default/busybox-7dff88458-8whmx"
	I0916 18:13:06.618663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-8whmx" node="ha-365438-m02"
	E0916 18:19:32.387108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 18:19:33.449916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0916 18:19:33.482561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 18:19:35.618140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0916 18:19:38.512116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0916 18:19:38.595642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 18:19:38.787364       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0916 18:19:39.377753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0916 18:19:40.181265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0916 18:19:40.411295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 18:19:41.343359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	I0916 18:19:41.804892       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0916 18:19:41.805144       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0916 18:19:41.805398       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 18:19:41.805408       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 16 18:25:17 ha-365438 kubelet[1307]: E0916 18:25:17.456046    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511117454711542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:27 ha-365438 kubelet[1307]: E0916 18:25:27.251337    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:25:27 ha-365438 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:25:27 ha-365438 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:25:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:25:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:25:27 ha-365438 kubelet[1307]: E0916 18:25:27.458366    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511127457961694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:27 ha-365438 kubelet[1307]: E0916 18:25:27.458414    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511127457961694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:37 ha-365438 kubelet[1307]: E0916 18:25:37.460913    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511137460341089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:37 ha-365438 kubelet[1307]: E0916 18:25:37.460972    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511137460341089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:47 ha-365438 kubelet[1307]: E0916 18:25:47.462661    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511147462178235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:47 ha-365438 kubelet[1307]: E0916 18:25:47.463056    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511147462178235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:57 ha-365438 kubelet[1307]: E0916 18:25:57.464717    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511157464285316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:25:57 ha-365438 kubelet[1307]: E0916 18:25:57.464798    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511157464285316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:26:07 ha-365438 kubelet[1307]: E0916 18:26:07.467014    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511167466562833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:26:07 ha-365438 kubelet[1307]: E0916 18:26:07.467060    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511167466562833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:26:17 ha-365438 kubelet[1307]: E0916 18:26:17.469557    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511177469125095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:26:17 ha-365438 kubelet[1307]: E0916 18:26:17.469584    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511177469125095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:26:27 ha-365438 kubelet[1307]: E0916 18:26:27.250347    1307 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:26:27 ha-365438 kubelet[1307]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:26:27 ha-365438 kubelet[1307]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:26:27 ha-365438 kubelet[1307]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:26:27 ha-365438 kubelet[1307]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:26:27 ha-365438 kubelet[1307]: E0916 18:26:27.471638    1307 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511187471155569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:26:27 ha-365438 kubelet[1307]: E0916 18:26:27.471685    1307 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726511187471155569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 18:26:31.104232  401816 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19649-371203/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
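The "bufio.Scanner: token too long" message in the stderr block is the error Go's bufio.Scanner returns when a single line exceeds its token buffer, which defaults to bufio.MaxScanTokenSize (64 KiB); lastStart.txt contains very long single-line entries such as the cluster-config dumps further down, so the log collector could not re-read it. Below is a minimal, self-contained sketch of that failure mode and the usual workaround (raising the scanner's buffer cap); the file path and sizes are illustrative only and this is not minikube's actual code.

package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    // Illustrative path only; the report's real file lives under .minikube/logs/.
    f, err := os.Open("lastStart.txt")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer f.Close()

    sc := bufio.NewScanner(f)
    // Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB instead.
    sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
    for sc.Scan() {
        fmt.Println(sc.Text())
    }
    if err := sc.Err(); err != nil {
        // With the default buffer, an over-long line surfaces here as
        // "bufio.Scanner: token too long".
        fmt.Fprintln(os.Stderr, "scan error:", err)
    }
}

With the larger buffer the loop reads every line; with the default cap, scanning stops at the first over-long line and sc.Err() reports the error quoted above.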
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-365438 -n ha-365438
helpers_test.go:261: (dbg) Run:  kubectl --context ha-365438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.93s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (323.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-588591
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-588591
E0916 18:43:56.985049  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-588591: exit status 82 (2m1.929911148s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-588591-m03"  ...
	* Stopping node "multinode-588591-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
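Exit status 82 with GUEST_STOP_TIMEOUT above means the stop request went out but the VM still reported "Running" when the wait deadline expired. As a rough illustration of that stop-and-poll shape (a generic sketch under stated assumptions: stopVM and vmState are hypothetical stand-ins, not minikube's or libmachine's API):

package main

import (
    "context"
    "fmt"
    "time"
)

// stopWithTimeout issues a stop request, then polls the machine state until it
// reports "Stopped" or the deadline passes.
func stopWithTimeout(parent context.Context, stopVM func() error, vmState func() (string, error), timeout time.Duration) error {
    ctx, cancel := context.WithTimeout(parent, timeout)
    defer cancel()

    if err := stopVM(); err != nil {
        return fmt.Errorf("stop request failed: %w", err)
    }

    ticker := time.NewTicker(2 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            state, _ := vmState()
            // The shape of the failure above: the deadline expires while the
            // machine still reports "Running".
            return fmt.Errorf("unable to stop vm, current state %q: %w", state, ctx.Err())
        case <-ticker.C:
            state, err := vmState()
            if err != nil {
                return err
            }
            if state == "Stopped" {
                return nil
            }
        }
    }
}

func main() {
    // Toy usage: a machine that never leaves "Running", to exercise the timeout path.
    err := stopWithTimeout(context.Background(),
        func() error { return nil },
        func() (string, error) { return "Running", nil },
        5*time.Second)
    fmt.Println(err)
}

Running the toy main prints the timeout error after five seconds, mirroring the stderr message about the machine still being in state "Running".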
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-588591" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-588591 --wait=true -v=8 --alsologtostderr
E0916 18:47:00.052256  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-588591 --wait=true -v=8 --alsologtostderr: (3m19.09152448s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-588591
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-588591 -n multinode-588591
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-588591 logs -n 25: (1.520438134s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3158127858/001/cp-test_multinode-588591-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591:/home/docker/cp-test_multinode-588591-m02_multinode-588591.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591 sudo cat                                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m02_multinode-588591.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03:/home/docker/cp-test_multinode-588591-m02_multinode-588591-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591-m03 sudo cat                                   | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m02_multinode-588591-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp testdata/cp-test.txt                                                | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3158127858/001/cp-test_multinode-588591-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591:/home/docker/cp-test_multinode-588591-m03_multinode-588591.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591 sudo cat                                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m03_multinode-588591.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02:/home/docker/cp-test_multinode-588591-m03_multinode-588591-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591-m02 sudo cat                                   | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m03_multinode-588591-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-588591 node stop m03                                                          | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	| node    | multinode-588591 node start                                                             | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-588591                                                                | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:42 UTC |                     |
	| stop    | -p multinode-588591                                                                     | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:42 UTC |                     |
	| start   | -p multinode-588591                                                                     | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:44 UTC | 16 Sep 24 18:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-588591                                                                | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 18:44:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 18:44:13.863503  411348 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:44:13.863633  411348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:44:13.863642  411348 out.go:358] Setting ErrFile to fd 2...
	I0916 18:44:13.863647  411348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:44:13.863855  411348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:44:13.864413  411348 out.go:352] Setting JSON to false
	I0916 18:44:13.865399  411348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8797,"bootTime":1726503457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:44:13.865509  411348 start.go:139] virtualization: kvm guest
	I0916 18:44:13.868254  411348 out.go:177] * [multinode-588591] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:44:13.870128  411348 notify.go:220] Checking for updates...
	I0916 18:44:13.870157  411348 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:44:13.872014  411348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:44:13.873728  411348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:44:13.875487  411348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:44:13.877107  411348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:44:13.878582  411348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:44:13.880584  411348 config.go:182] Loaded profile config "multinode-588591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:44:13.880714  411348 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:44:13.881374  411348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:44:13.881448  411348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:44:13.896615  411348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0916 18:44:13.897171  411348 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:44:13.897782  411348 main.go:141] libmachine: Using API Version  1
	I0916 18:44:13.897823  411348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:44:13.898194  411348 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:44:13.898378  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:44:13.934576  411348 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 18:44:13.936011  411348 start.go:297] selected driver: kvm2
	I0916 18:44:13.936028  411348 start.go:901] validating driver "kvm2" against &{Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:44:13.936191  411348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:44:13.936517  411348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:44:13.936589  411348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 18:44:13.952024  411348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 18:44:13.952705  411348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:44:13.952751  411348 cni.go:84] Creating CNI manager for ""
	I0916 18:44:13.952821  411348 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 18:44:13.952905  411348 start.go:340] cluster config:
	{Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:44:13.953097  411348 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:44:13.955038  411348 out.go:177] * Starting "multinode-588591" primary control-plane node in "multinode-588591" cluster
	I0916 18:44:13.956796  411348 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:44:13.956841  411348 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 18:44:13.956854  411348 cache.go:56] Caching tarball of preloaded images
	I0916 18:44:13.956961  411348 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:44:13.956974  411348 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:44:13.957118  411348 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/config.json ...
	I0916 18:44:13.957337  411348 start.go:360] acquireMachinesLock for multinode-588591: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:44:13.957393  411348 start.go:364] duration metric: took 34.341µs to acquireMachinesLock for "multinode-588591"
	I0916 18:44:13.957412  411348 start.go:96] Skipping create...Using existing machine configuration
	I0916 18:44:13.957420  411348 fix.go:54] fixHost starting: 
	I0916 18:44:13.957690  411348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:44:13.957761  411348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:44:13.972726  411348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0916 18:44:13.973294  411348 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:44:13.973868  411348 main.go:141] libmachine: Using API Version  1
	I0916 18:44:13.973902  411348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:44:13.974201  411348 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:44:13.974401  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:44:13.974543  411348 main.go:141] libmachine: (multinode-588591) Calling .GetState
	I0916 18:44:13.976212  411348 fix.go:112] recreateIfNeeded on multinode-588591: state=Running err=<nil>
	W0916 18:44:13.976237  411348 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 18:44:13.978516  411348 out.go:177] * Updating the running kvm2 "multinode-588591" VM ...
	I0916 18:44:13.980071  411348 machine.go:93] provisionDockerMachine start ...
	I0916 18:44:13.980094  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:44:13.980310  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:13.982608  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:13.983040  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:13.983064  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:13.983253  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:13.983429  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:13.983603  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:13.983767  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:13.983973  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:13.984241  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:13.984262  411348 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 18:44:14.090399  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-588591
	
	I0916 18:44:14.090439  411348 main.go:141] libmachine: (multinode-588591) Calling .GetMachineName
	I0916 18:44:14.090684  411348 buildroot.go:166] provisioning hostname "multinode-588591"
	I0916 18:44:14.090711  411348 main.go:141] libmachine: (multinode-588591) Calling .GetMachineName
	I0916 18:44:14.090996  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.093763  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.094211  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.094280  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.094399  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.094601  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.094767  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.094903  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.095164  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:14.095330  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:14.095342  411348 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-588591 && echo "multinode-588591" | sudo tee /etc/hostname
	I0916 18:44:14.216124  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-588591
	
	I0916 18:44:14.216156  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.219121  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.219481  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.219506  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.219764  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.219984  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.220214  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.220450  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.220683  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:14.220876  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:14.220893  411348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-588591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-588591/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-588591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:44:14.326224  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:44:14.326263  411348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:44:14.326290  411348 buildroot.go:174] setting up certificates
	I0916 18:44:14.326302  411348 provision.go:84] configureAuth start
	I0916 18:44:14.326311  411348 main.go:141] libmachine: (multinode-588591) Calling .GetMachineName
	I0916 18:44:14.326629  411348 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:44:14.329598  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.330051  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.330074  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.330217  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.332198  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.332516  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.332552  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.332673  411348 provision.go:143] copyHostCerts
	I0916 18:44:14.332712  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:44:14.332749  411348 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:44:14.332761  411348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:44:14.332841  411348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:44:14.332977  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:44:14.333001  411348 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:44:14.333009  411348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:44:14.333050  411348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:44:14.333129  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:44:14.333152  411348 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:44:14.333160  411348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:44:14.333192  411348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:44:14.333296  411348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.multinode-588591 san=[127.0.0.1 192.168.39.90 localhost minikube multinode-588591]
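	[editor's note] The step above produces a server certificate signed by the minikube CA, with the node's IPs and hostnames bundled as SANs. Purely as an illustration of that kind of CA-signed SAN certificate (not minikube's actual implementation; key size, validity window and subject fields here are assumptions), a minimal Go sketch using crypto/x509:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"log"
		"math/big"
		"net"
		"time"
	)

	func check(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// Hypothetical stand-in for the ca.pem / ca-key.pem pair referenced in the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate carrying the SANs reported above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-588591"}},
			DNSNames:     []string{"localhost", "minikube", "multinode-588591"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.90")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		fmt.Printf("server cert: %d bytes (DER)\n", len(srvDER))
	}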
	I0916 18:44:14.455816  411348 provision.go:177] copyRemoteCerts
	I0916 18:44:14.455890  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:44:14.455916  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.459199  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.459589  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.459620  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.459823  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.460036  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.460223  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.460452  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:44:14.543759  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:44:14.543834  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:44:14.569381  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:44:14.569484  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 18:44:14.596488  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:44:14.596568  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 18:44:14.624074  411348 provision.go:87] duration metric: took 297.756485ms to configureAuth
	I0916 18:44:14.624110  411348 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:44:14.624337  411348 config.go:182] Loaded profile config "multinode-588591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:44:14.624414  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.627254  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.627665  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.627696  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.627916  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.628095  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.628247  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.628353  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.628544  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:14.628759  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:14.628774  411348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:45:45.337930  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:45:45.337979  411348 machine.go:96] duration metric: took 1m31.357894451s to provisionDockerMachine
	I0916 18:45:45.337994  411348 start.go:293] postStartSetup for "multinode-588591" (driver="kvm2")
	I0916 18:45:45.338018  411348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:45:45.338044  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.338430  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:45:45.338464  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.341618  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.342117  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.342142  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.342295  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.342496  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.342713  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.342901  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:45:45.425086  411348 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:45:45.429680  411348 command_runner.go:130] > NAME=Buildroot
	I0916 18:45:45.429703  411348 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 18:45:45.429707  411348 command_runner.go:130] > ID=buildroot
	I0916 18:45:45.429712  411348 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 18:45:45.429717  411348 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 18:45:45.429791  411348 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:45:45.429815  411348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:45:45.429880  411348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:45:45.429982  411348 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:45:45.429995  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:45:45.430089  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:45:45.440460  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:45:45.465659  411348 start.go:296] duration metric: took 127.648709ms for postStartSetup
	I0916 18:45:45.465705  411348 fix.go:56] duration metric: took 1m31.508285808s for fixHost
	I0916 18:45:45.465728  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.468638  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.469041  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.469067  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.469237  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.469434  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.469586  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.469742  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.469931  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:45:45.470115  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:45:45.470126  411348 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:45:45.574106  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726512345.540426972
	
	I0916 18:45:45.574134  411348 fix.go:216] guest clock: 1726512345.540426972
	I0916 18:45:45.574144  411348 fix.go:229] Guest: 2024-09-16 18:45:45.540426972 +0000 UTC Remote: 2024-09-16 18:45:45.465709078 +0000 UTC m=+91.640325179 (delta=74.717894ms)
	I0916 18:45:45.574192  411348 fix.go:200] guest clock delta is within tolerance: 74.717894ms
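	[editor's note] The reported delta is simply the difference between the guest timestamp (from `date +%s.%N`) and the host-side remote timestamp. A tiny check of the arithmetic, using the exact values from the log (illustrative only):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps copied from the log lines above (guest vs. remote clock).
		guest := time.Date(2024, 9, 16, 18, 45, 45, 540426972, time.UTC)
		remote := time.Date(2024, 9, 16, 18, 45, 45, 465709078, time.UTC)
		fmt.Println(guest.Sub(remote)) // 74.717894ms, matching the reported delta
	}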
	I0916 18:45:45.574199  411348 start.go:83] releasing machines lock for "multinode-588591", held for 1m31.616794864s
	I0916 18:45:45.574226  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.574508  411348 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:45:45.577580  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.578029  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.578077  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.578240  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.578861  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.579027  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.579103  411348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:45:45.579172  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.579249  411348 ssh_runner.go:195] Run: cat /version.json
	I0916 18:45:45.579273  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.581967  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.582366  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.582397  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.582483  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.582546  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.582719  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.582865  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.582974  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.582999  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:45:45.583015  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.583179  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.583353  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.583530  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.583684  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:45:45.677915  411348 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 18:45:45.678456  411348 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0916 18:45:45.678630  411348 ssh_runner.go:195] Run: systemctl --version
	I0916 18:45:45.685140  411348 command_runner.go:130] > systemd 252 (252)
	I0916 18:45:45.685184  411348 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 18:45:45.685260  411348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:45:45.851793  411348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 18:45:45.859540  411348 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 18:45:45.860041  411348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:45:45.860101  411348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:45:45.869780  411348 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 18:45:45.869817  411348 start.go:495] detecting cgroup driver to use...
	I0916 18:45:45.869881  411348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:45:45.886267  411348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:45:45.900551  411348 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:45:45.900608  411348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:45:45.915265  411348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:45:45.929411  411348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:45:46.077710  411348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:45:46.219053  411348 docker.go:233] disabling docker service ...
	I0916 18:45:46.219125  411348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:45:46.236265  411348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:45:46.250695  411348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:45:46.415489  411348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:45:46.557701  411348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:45:46.573526  411348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:45:46.592765  411348 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 18:45:46.593083  411348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:45:46.593141  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.604628  411348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:45:46.604702  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.616703  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.628344  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.639621  411348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:45:46.651632  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.662818  411348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.674145  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.686227  411348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:45:46.695619  411348 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 18:45:46.695700  411348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:45:46.705190  411348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:45:46.839353  411348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:45:47.048132  411348 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:45:47.048211  411348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:45:47.053611  411348 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 18:45:47.053646  411348 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 18:45:47.053656  411348 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0916 18:45:47.053673  411348 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 18:45:47.053679  411348 command_runner.go:130] > Access: 2024-09-16 18:45:46.897025857 +0000
	I0916 18:45:47.053685  411348 command_runner.go:130] > Modify: 2024-09-16 18:45:46.897025857 +0000
	I0916 18:45:47.053690  411348 command_runner.go:130] > Change: 2024-09-16 18:45:46.897025857 +0000
	I0916 18:45:47.053694  411348 command_runner.go:130] >  Birth: -
	I0916 18:45:47.053739  411348 start.go:563] Will wait 60s for crictl version
	I0916 18:45:47.053783  411348 ssh_runner.go:195] Run: which crictl
	I0916 18:45:47.057935  411348 command_runner.go:130] > /usr/bin/crictl
	I0916 18:45:47.058020  411348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:45:47.102737  411348 command_runner.go:130] > Version:  0.1.0
	I0916 18:45:47.102772  411348 command_runner.go:130] > RuntimeName:  cri-o
	I0916 18:45:47.102785  411348 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 18:45:47.102793  411348 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 18:45:47.103966  411348 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 18:45:47.104052  411348 ssh_runner.go:195] Run: crio --version
	I0916 18:45:47.133658  411348 command_runner.go:130] > crio version 1.29.1
	I0916 18:45:47.133689  411348 command_runner.go:130] > Version:        1.29.1
	I0916 18:45:47.133699  411348 command_runner.go:130] > GitCommit:      unknown
	I0916 18:45:47.133705  411348 command_runner.go:130] > GitCommitDate:  unknown
	I0916 18:45:47.133711  411348 command_runner.go:130] > GitTreeState:   clean
	I0916 18:45:47.133719  411348 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0916 18:45:47.133727  411348 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 18:45:47.133732  411348 command_runner.go:130] > Compiler:       gc
	I0916 18:45:47.133739  411348 command_runner.go:130] > Platform:       linux/amd64
	I0916 18:45:47.133746  411348 command_runner.go:130] > Linkmode:       dynamic
	I0916 18:45:47.133753  411348 command_runner.go:130] > BuildTags:      
	I0916 18:45:47.133760  411348 command_runner.go:130] >   containers_image_ostree_stub
	I0916 18:45:47.133767  411348 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 18:45:47.133773  411348 command_runner.go:130] >   btrfs_noversion
	I0916 18:45:47.133781  411348 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 18:45:47.133788  411348 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 18:45:47.133794  411348 command_runner.go:130] >   seccomp
	I0916 18:45:47.133803  411348 command_runner.go:130] > LDFlags:          unknown
	I0916 18:45:47.133809  411348 command_runner.go:130] > SeccompEnabled:   true
	I0916 18:45:47.133817  411348 command_runner.go:130] > AppArmorEnabled:  false
	I0916 18:45:47.134888  411348 ssh_runner.go:195] Run: crio --version
	I0916 18:45:47.162864  411348 command_runner.go:130] > crio version 1.29.1
	I0916 18:45:47.162895  411348 command_runner.go:130] > Version:        1.29.1
	I0916 18:45:47.162904  411348 command_runner.go:130] > GitCommit:      unknown
	I0916 18:45:47.162911  411348 command_runner.go:130] > GitCommitDate:  unknown
	I0916 18:45:47.162917  411348 command_runner.go:130] > GitTreeState:   clean
	I0916 18:45:47.162935  411348 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0916 18:45:47.162943  411348 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 18:45:47.162950  411348 command_runner.go:130] > Compiler:       gc
	I0916 18:45:47.162957  411348 command_runner.go:130] > Platform:       linux/amd64
	I0916 18:45:47.162964  411348 command_runner.go:130] > Linkmode:       dynamic
	I0916 18:45:47.162975  411348 command_runner.go:130] > BuildTags:      
	I0916 18:45:47.162985  411348 command_runner.go:130] >   containers_image_ostree_stub
	I0916 18:45:47.162993  411348 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 18:45:47.163000  411348 command_runner.go:130] >   btrfs_noversion
	I0916 18:45:47.163010  411348 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 18:45:47.163018  411348 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 18:45:47.163023  411348 command_runner.go:130] >   seccomp
	I0916 18:45:47.163031  411348 command_runner.go:130] > LDFlags:          unknown
	I0916 18:45:47.163039  411348 command_runner.go:130] > SeccompEnabled:   true
	I0916 18:45:47.163049  411348 command_runner.go:130] > AppArmorEnabled:  false
	I0916 18:45:47.166577  411348 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:45:47.168242  411348 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:45:47.171075  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:47.171499  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:47.171521  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:47.171780  411348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:45:47.176144  411348 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0916 18:45:47.176238  411348 kubeadm.go:883] updating cluster {Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 18:45:47.176364  411348 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:45:47.176403  411348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:45:47.219665  411348 command_runner.go:130] > {
	I0916 18:45:47.219688  411348 command_runner.go:130] >   "images": [
	I0916 18:45:47.219692  411348 command_runner.go:130] >     {
	I0916 18:45:47.219701  411348 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 18:45:47.219706  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219713  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 18:45:47.219716  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219720  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219729  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 18:45:47.219736  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 18:45:47.219740  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219744  411348 command_runner.go:130] >       "size": "87190579",
	I0916 18:45:47.219747  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.219751  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.219758  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.219762  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.219765  411348 command_runner.go:130] >     },
	I0916 18:45:47.219768  411348 command_runner.go:130] >     {
	I0916 18:45:47.219774  411348 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 18:45:47.219778  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219783  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 18:45:47.219787  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219791  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219797  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 18:45:47.219810  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 18:45:47.219814  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219819  411348 command_runner.go:130] >       "size": "1363676",
	I0916 18:45:47.219823  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.219837  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.219843  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.219847  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.219852  411348 command_runner.go:130] >     },
	I0916 18:45:47.219855  411348 command_runner.go:130] >     {
	I0916 18:45:47.219861  411348 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 18:45:47.219867  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219872  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 18:45:47.219877  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219881  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219889  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 18:45:47.219898  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 18:45:47.219902  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219906  411348 command_runner.go:130] >       "size": "31470524",
	I0916 18:45:47.219910  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.219914  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.219920  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.219924  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.219928  411348 command_runner.go:130] >     },
	I0916 18:45:47.219932  411348 command_runner.go:130] >     {
	I0916 18:45:47.219938  411348 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 18:45:47.219944  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219949  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 18:45:47.219954  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219958  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219966  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 18:45:47.219980  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 18:45:47.219985  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219989  411348 command_runner.go:130] >       "size": "63273227",
	I0916 18:45:47.220000  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.220007  411348 command_runner.go:130] >       "username": "nonroot",
	I0916 18:45:47.220011  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220017  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220021  411348 command_runner.go:130] >     },
	I0916 18:45:47.220026  411348 command_runner.go:130] >     {
	I0916 18:45:47.220034  411348 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 18:45:47.220040  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220045  411348 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 18:45:47.220051  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220055  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220064  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 18:45:47.220073  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 18:45:47.220078  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220082  411348 command_runner.go:130] >       "size": "149009664",
	I0916 18:45:47.220086  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220091  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220094  411348 command_runner.go:130] >       },
	I0916 18:45:47.220100  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220105  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220110  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220114  411348 command_runner.go:130] >     },
	I0916 18:45:47.220120  411348 command_runner.go:130] >     {
	I0916 18:45:47.220125  411348 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 18:45:47.220131  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220136  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 18:45:47.220142  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220145  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220154  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 18:45:47.220163  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 18:45:47.220169  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220173  411348 command_runner.go:130] >       "size": "95237600",
	I0916 18:45:47.220182  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220194  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220200  411348 command_runner.go:130] >       },
	I0916 18:45:47.220204  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220210  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220214  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220220  411348 command_runner.go:130] >     },
	I0916 18:45:47.220223  411348 command_runner.go:130] >     {
	I0916 18:45:47.220231  411348 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 18:45:47.220235  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220240  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 18:45:47.220246  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220250  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220259  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 18:45:47.220266  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 18:45:47.220271  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220275  411348 command_runner.go:130] >       "size": "89437508",
	I0916 18:45:47.220284  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220288  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220295  411348 command_runner.go:130] >       },
	I0916 18:45:47.220301  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220305  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220311  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220315  411348 command_runner.go:130] >     },
	I0916 18:45:47.220320  411348 command_runner.go:130] >     {
	I0916 18:45:47.220326  411348 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 18:45:47.220333  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220341  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 18:45:47.220344  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220351  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220365  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 18:45:47.220374  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 18:45:47.220380  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220385  411348 command_runner.go:130] >       "size": "92733849",
	I0916 18:45:47.220391  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.220395  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220399  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220403  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220406  411348 command_runner.go:130] >     },
	I0916 18:45:47.220409  411348 command_runner.go:130] >     {
	I0916 18:45:47.220415  411348 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 18:45:47.220418  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220423  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 18:45:47.220427  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220430  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220437  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 18:45:47.220444  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 18:45:47.220447  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220451  411348 command_runner.go:130] >       "size": "68420934",
	I0916 18:45:47.220454  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220457  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220460  411348 command_runner.go:130] >       },
	I0916 18:45:47.220464  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220467  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220471  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220474  411348 command_runner.go:130] >     },
	I0916 18:45:47.220477  411348 command_runner.go:130] >     {
	I0916 18:45:47.220483  411348 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 18:45:47.220488  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220492  411348 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 18:45:47.220496  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220501  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220509  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 18:45:47.220516  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 18:45:47.220522  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220526  411348 command_runner.go:130] >       "size": "742080",
	I0916 18:45:47.220529  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220534  411348 command_runner.go:130] >         "value": "65535"
	I0916 18:45:47.220539  411348 command_runner.go:130] >       },
	I0916 18:45:47.220543  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220548  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220552  411348 command_runner.go:130] >       "pinned": true
	I0916 18:45:47.220558  411348 command_runner.go:130] >     }
	I0916 18:45:47.220563  411348 command_runner.go:130] >   ]
	I0916 18:45:47.220567  411348 command_runner.go:130] > }
	I0916 18:45:47.221094  411348 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:45:47.221118  411348 crio.go:433] Images already preloaded, skipping extraction
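	[editor's note] The "all images are preloaded" conclusion is drawn from the `crictl images --output json` listing above: each required tag is looked up in the returned image list. A minimal sketch of that kind of check, assuming only the JSON fields visible in the log (not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors just the fields of `crictl images --output json` used here.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether any listed image carries the wanted tag.
	func hasImage(raw []byte, tag string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
		ok, _ := hasImage(raw, "registry.k8s.io/pause:3.10")
		fmt.Println(ok) // true
	}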
	I0916 18:45:47.221181  411348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:45:47.254888  411348 command_runner.go:130] > {
	I0916 18:45:47.254920  411348 command_runner.go:130] >   "images": [
	I0916 18:45:47.254927  411348 command_runner.go:130] >     {
	I0916 18:45:47.254940  411348 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 18:45:47.254948  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.254975  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 18:45:47.254979  411348 command_runner.go:130] >       ],
	I0916 18:45:47.254984  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.254992  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 18:45:47.255000  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 18:45:47.255003  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255008  411348 command_runner.go:130] >       "size": "87190579",
	I0916 18:45:47.255013  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255017  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255031  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255037  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255041  411348 command_runner.go:130] >     },
	I0916 18:45:47.255044  411348 command_runner.go:130] >     {
	I0916 18:45:47.255050  411348 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 18:45:47.255056  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255061  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 18:45:47.255066  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255070  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255079  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 18:45:47.255086  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 18:45:47.255092  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255095  411348 command_runner.go:130] >       "size": "1363676",
	I0916 18:45:47.255100  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255107  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255113  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255117  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255121  411348 command_runner.go:130] >     },
	I0916 18:45:47.255124  411348 command_runner.go:130] >     {
	I0916 18:45:47.255132  411348 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 18:45:47.255137  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255143  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 18:45:47.255147  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255151  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255160  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 18:45:47.255170  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 18:45:47.255173  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255177  411348 command_runner.go:130] >       "size": "31470524",
	I0916 18:45:47.255181  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255185  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255189  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255192  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255195  411348 command_runner.go:130] >     },
	I0916 18:45:47.255199  411348 command_runner.go:130] >     {
	I0916 18:45:47.255205  411348 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 18:45:47.255212  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255219  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 18:45:47.255222  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255226  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255233  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 18:45:47.255248  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 18:45:47.255254  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255259  411348 command_runner.go:130] >       "size": "63273227",
	I0916 18:45:47.255265  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255272  411348 command_runner.go:130] >       "username": "nonroot",
	I0916 18:45:47.255278  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255282  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255288  411348 command_runner.go:130] >     },
	I0916 18:45:47.255290  411348 command_runner.go:130] >     {
	I0916 18:45:47.255297  411348 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 18:45:47.255302  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255307  411348 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 18:45:47.255311  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255314  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255321  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 18:45:47.255330  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 18:45:47.255333  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255338  411348 command_runner.go:130] >       "size": "149009664",
	I0916 18:45:47.255343  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255347  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255369  411348 command_runner.go:130] >       },
	I0916 18:45:47.255373  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255377  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255382  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255387  411348 command_runner.go:130] >     },
	I0916 18:45:47.255393  411348 command_runner.go:130] >     {
	I0916 18:45:47.255401  411348 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 18:45:47.255406  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255411  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 18:45:47.255415  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255420  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255431  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 18:45:47.255438  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 18:45:47.255444  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255448  411348 command_runner.go:130] >       "size": "95237600",
	I0916 18:45:47.255452  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255456  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255459  411348 command_runner.go:130] >       },
	I0916 18:45:47.255463  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255467  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255473  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255476  411348 command_runner.go:130] >     },
	I0916 18:45:47.255480  411348 command_runner.go:130] >     {
	I0916 18:45:47.255487  411348 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 18:45:47.255491  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255499  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 18:45:47.255507  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255513  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255527  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 18:45:47.255542  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 18:45:47.255557  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255565  411348 command_runner.go:130] >       "size": "89437508",
	I0916 18:45:47.255569  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255575  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255578  411348 command_runner.go:130] >       },
	I0916 18:45:47.255582  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255586  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255590  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255594  411348 command_runner.go:130] >     },
	I0916 18:45:47.255597  411348 command_runner.go:130] >     {
	I0916 18:45:47.255605  411348 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 18:45:47.255611  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255616  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 18:45:47.255622  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255625  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255639  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 18:45:47.255648  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 18:45:47.255652  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255656  411348 command_runner.go:130] >       "size": "92733849",
	I0916 18:45:47.255661  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255665  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255671  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255676  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255680  411348 command_runner.go:130] >     },
	I0916 18:45:47.255684  411348 command_runner.go:130] >     {
	I0916 18:45:47.255690  411348 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 18:45:47.255695  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255700  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 18:45:47.255704  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255708  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255719  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 18:45:47.255732  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 18:45:47.255743  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255752  411348 command_runner.go:130] >       "size": "68420934",
	I0916 18:45:47.255762  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255768  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255776  411348 command_runner.go:130] >       },
	I0916 18:45:47.255783  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255792  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255799  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255807  411348 command_runner.go:130] >     },
	I0916 18:45:47.255813  411348 command_runner.go:130] >     {
	I0916 18:45:47.255820  411348 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 18:45:47.255824  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255829  411348 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 18:45:47.255832  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255836  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255843  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 18:45:47.255855  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 18:45:47.255861  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255865  411348 command_runner.go:130] >       "size": "742080",
	I0916 18:45:47.255869  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255873  411348 command_runner.go:130] >         "value": "65535"
	I0916 18:45:47.255876  411348 command_runner.go:130] >       },
	I0916 18:45:47.255881  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255884  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255888  411348 command_runner.go:130] >       "pinned": true
	I0916 18:45:47.255892  411348 command_runner.go:130] >     }
	I0916 18:45:47.255895  411348 command_runner.go:130] >   ]
	I0916 18:45:47.255899  411348 command_runner.go:130] > }
	I0916 18:45:47.256640  411348 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:45:47.256661  411348 cache_images.go:84] Images are preloaded, skipping loading
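For reference, the JSON block above is the CRI image list that the surrounding log lines summarize as "all images are preloaded". A minimal Go sketch that parses the same shape, assuming crictl is available on the node; the command invocation and type names here are illustrative, not minikube's own crio.go code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// criImage mirrors the per-image keys visible in the log output above.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // reported as a string, e.g. "742080"
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Assumed invocation; inside minikube this would run over SSH on the VM.
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			fmt.Println("crictl not available:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("unexpected output shape:", err)
			return
		}
		for _, img := range list.Images {
			fmt.Printf("%v  size=%s pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
		}
	}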
	I0916 18:45:47.256670  411348 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0916 18:45:47.256811  411348 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-588591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
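The kubelet ExecStart line and the cluster config above are produced by substituting per-node values (Kubernetes version, node name, node IP) into a template. A small Go text/template sketch of that substitution step, using the values captured in the log; the template text itself is an assumption for illustration, not minikube's actual kubeadm template:

	package main

	import (
		"os"
		"text/template"
	)

	// Assumed template text; flags mirror the ExecStart line logged above.
	const kubeletFlags = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
		`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
		`--config=/var/lib/kubelet/config.yaml ` +
		`--hostname-override={{.NodeName}} ` +
		`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
		`--node-ip={{.NodeIP}}` + "\n"

	func main() {
		node := struct{ KubernetesVersion, NodeName, NodeIP string }{
			KubernetesVersion: "v1.31.1",
			NodeName:          "multinode-588591", // as captured in the log above
			NodeIP:            "192.168.39.90",
		}
		tmpl := template.Must(template.New("kubelet").Parse(kubeletFlags))
		if err := tmpl.Execute(os.Stdout, node); err != nil {
			panic(err)
		}
	}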
	I0916 18:45:47.256903  411348 ssh_runner.go:195] Run: crio config
	I0916 18:45:47.300586  411348 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 18:45:47.300624  411348 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 18:45:47.300636  411348 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 18:45:47.300640  411348 command_runner.go:130] > #
	I0916 18:45:47.300647  411348 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 18:45:47.300653  411348 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 18:45:47.300671  411348 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 18:45:47.300678  411348 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 18:45:47.300681  411348 command_runner.go:130] > # reload'.
	I0916 18:45:47.300687  411348 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 18:45:47.300697  411348 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 18:45:47.300706  411348 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 18:45:47.300716  411348 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 18:45:47.300722  411348 command_runner.go:130] > [crio]
	I0916 18:45:47.300731  411348 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 18:45:47.300741  411348 command_runner.go:130] > # containers images, in this directory.
	I0916 18:45:47.300748  411348 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 18:45:47.300766  411348 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 18:45:47.300777  411348 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 18:45:47.300789  411348 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 18:45:47.300800  411348 command_runner.go:130] > # imagestore = ""
	I0916 18:45:47.300808  411348 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 18:45:47.300815  411348 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 18:45:47.300820  411348 command_runner.go:130] > storage_driver = "overlay"
	I0916 18:45:47.300828  411348 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 18:45:47.300833  411348 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 18:45:47.300838  411348 command_runner.go:130] > storage_option = [
	I0916 18:45:47.300844  411348 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 18:45:47.300953  411348 command_runner.go:130] > ]
	I0916 18:45:47.300973  411348 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 18:45:47.300980  411348 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 18:45:47.301168  411348 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 18:45:47.301189  411348 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 18:45:47.301200  411348 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 18:45:47.301211  411348 command_runner.go:130] > # always happen on a node reboot
	I0916 18:45:47.301578  411348 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 18:45:47.301603  411348 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 18:45:47.301610  411348 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 18:45:47.301615  411348 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 18:45:47.301706  411348 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 18:45:47.301729  411348 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 18:45:47.301743  411348 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 18:45:47.301972  411348 command_runner.go:130] > # internal_wipe = true
	I0916 18:45:47.301986  411348 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 18:45:47.301992  411348 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 18:45:47.302306  411348 command_runner.go:130] > # internal_repair = false
	I0916 18:45:47.302329  411348 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 18:45:47.302341  411348 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 18:45:47.302352  411348 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 18:45:47.302557  411348 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 18:45:47.302579  411348 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 18:45:47.302585  411348 command_runner.go:130] > [crio.api]
	I0916 18:45:47.302593  411348 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 18:45:47.302866  411348 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 18:45:47.302884  411348 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 18:45:47.303097  411348 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 18:45:47.303118  411348 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 18:45:47.303127  411348 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 18:45:47.303345  411348 command_runner.go:130] > # stream_port = "0"
	I0916 18:45:47.303357  411348 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 18:45:47.303551  411348 command_runner.go:130] > # stream_enable_tls = false
	I0916 18:45:47.303561  411348 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 18:45:47.303895  411348 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 18:45:47.303908  411348 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 18:45:47.303922  411348 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 18:45:47.303931  411348 command_runner.go:130] > # minutes.
	I0916 18:45:47.304108  411348 command_runner.go:130] > # stream_tls_cert = ""
	I0916 18:45:47.304126  411348 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 18:45:47.304135  411348 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 18:45:47.304292  411348 command_runner.go:130] > # stream_tls_key = ""
	I0916 18:45:47.304308  411348 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 18:45:47.304314  411348 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 18:45:47.304331  411348 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 18:45:47.304657  411348 command_runner.go:130] > # stream_tls_ca = ""
	I0916 18:45:47.304672  411348 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 18:45:47.304878  411348 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 18:45:47.304892  411348 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 18:45:47.305046  411348 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 18:45:47.305059  411348 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 18:45:47.305065  411348 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 18:45:47.305069  411348 command_runner.go:130] > [crio.runtime]
	I0916 18:45:47.305074  411348 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 18:45:47.305079  411348 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 18:45:47.305086  411348 command_runner.go:130] > # "nofile=1024:2048"
	I0916 18:45:47.305092  411348 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 18:45:47.305204  411348 command_runner.go:130] > # default_ulimits = [
	I0916 18:45:47.305618  411348 command_runner.go:130] > # ]
	I0916 18:45:47.305637  411348 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 18:45:47.306027  411348 command_runner.go:130] > # no_pivot = false
	I0916 18:45:47.306046  411348 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 18:45:47.306055  411348 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 18:45:47.306194  411348 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 18:45:47.306227  411348 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 18:45:47.306235  411348 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 18:45:47.306246  411348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 18:45:47.306253  411348 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 18:45:47.306259  411348 command_runner.go:130] > # Cgroup setting for conmon
	I0916 18:45:47.306269  411348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 18:45:47.306281  411348 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 18:45:47.306295  411348 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 18:45:47.306306  411348 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 18:45:47.306319  411348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 18:45:47.306327  411348 command_runner.go:130] > conmon_env = [
	I0916 18:45:47.306378  411348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 18:45:47.306427  411348 command_runner.go:130] > ]
	I0916 18:45:47.306445  411348 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 18:45:47.306457  411348 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 18:45:47.306469  411348 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 18:45:47.306551  411348 command_runner.go:130] > # default_env = [
	I0916 18:45:47.306701  411348 command_runner.go:130] > # ]
	I0916 18:45:47.306717  411348 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 18:45:47.306728  411348 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0916 18:45:47.307067  411348 command_runner.go:130] > # selinux = false
	I0916 18:45:47.307087  411348 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 18:45:47.307097  411348 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 18:45:47.307105  411348 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 18:45:47.307109  411348 command_runner.go:130] > # seccomp_profile = ""
	I0916 18:45:47.307114  411348 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 18:45:47.307120  411348 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 18:45:47.307132  411348 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 18:45:47.307137  411348 command_runner.go:130] > # which might increase security.
	I0916 18:45:47.307143  411348 command_runner.go:130] > # This option is currently deprecated,
	I0916 18:45:47.307149  411348 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 18:45:47.307157  411348 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 18:45:47.307164  411348 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 18:45:47.307172  411348 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 18:45:47.307178  411348 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 18:45:47.307185  411348 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 18:45:47.307192  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.307205  411348 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 18:45:47.307214  411348 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 18:45:47.307219  411348 command_runner.go:130] > # the cgroup blockio controller.
	I0916 18:45:47.307228  411348 command_runner.go:130] > # blockio_config_file = ""
	I0916 18:45:47.307239  411348 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 18:45:47.307248  411348 command_runner.go:130] > # blockio parameters.
	I0916 18:45:47.307254  411348 command_runner.go:130] > # blockio_reload = false
	I0916 18:45:47.307266  411348 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 18:45:47.307276  411348 command_runner.go:130] > # irqbalance daemon.
	I0916 18:45:47.307288  411348 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 18:45:47.307297  411348 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 18:45:47.307311  411348 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 18:45:47.307322  411348 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 18:45:47.307343  411348 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 18:45:47.307353  411348 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 18:45:47.307358  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.307368  411348 command_runner.go:130] > # rdt_config_file = ""
	I0916 18:45:47.307380  411348 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 18:45:47.307386  411348 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 18:45:47.307412  411348 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 18:45:47.307422  411348 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 18:45:47.307433  411348 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 18:45:47.307442  411348 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 18:45:47.307450  411348 command_runner.go:130] > # will be added.
	I0916 18:45:47.307456  411348 command_runner.go:130] > # default_capabilities = [
	I0916 18:45:47.307460  411348 command_runner.go:130] > # 	"CHOWN",
	I0916 18:45:47.307464  411348 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 18:45:47.307468  411348 command_runner.go:130] > # 	"FSETID",
	I0916 18:45:47.307474  411348 command_runner.go:130] > # 	"FOWNER",
	I0916 18:45:47.307477  411348 command_runner.go:130] > # 	"SETGID",
	I0916 18:45:47.307481  411348 command_runner.go:130] > # 	"SETUID",
	I0916 18:45:47.307485  411348 command_runner.go:130] > # 	"SETPCAP",
	I0916 18:45:47.307489  411348 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 18:45:47.307494  411348 command_runner.go:130] > # 	"KILL",
	I0916 18:45:47.307497  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307504  411348 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 18:45:47.307513  411348 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 18:45:47.307518  411348 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 18:45:47.307524  411348 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 18:45:47.307533  411348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 18:45:47.307542  411348 command_runner.go:130] > default_sysctls = [
	I0916 18:45:47.307555  411348 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 18:45:47.307563  411348 command_runner.go:130] > ]
	I0916 18:45:47.307571  411348 command_runner.go:130] > # List of devices on the host that a
	I0916 18:45:47.307583  411348 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 18:45:47.307592  411348 command_runner.go:130] > # allowed_devices = [
	I0916 18:45:47.307598  411348 command_runner.go:130] > # 	"/dev/fuse",
	I0916 18:45:47.307607  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307614  411348 command_runner.go:130] > # List of additional devices, specified as
	I0916 18:45:47.307639  411348 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 18:45:47.307648  411348 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 18:45:47.307662  411348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 18:45:47.307670  411348 command_runner.go:130] > # additional_devices = [
	I0916 18:45:47.307680  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307688  411348 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 18:45:47.307697  411348 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 18:45:47.307705  411348 command_runner.go:130] > # 	"/etc/cdi",
	I0916 18:45:47.307715  411348 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 18:45:47.307720  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307734  411348 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 18:45:47.307744  411348 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 18:45:47.307754  411348 command_runner.go:130] > # Defaults to false.
	I0916 18:45:47.307762  411348 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 18:45:47.307774  411348 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 18:45:47.307786  411348 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 18:45:47.307793  411348 command_runner.go:130] > # hooks_dir = [
	I0916 18:45:47.307798  411348 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 18:45:47.307803  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307810  411348 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 18:45:47.307822  411348 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 18:45:47.307834  411348 command_runner.go:130] > # its default mounts from the following two files:
	I0916 18:45:47.307839  411348 command_runner.go:130] > #
	I0916 18:45:47.307853  411348 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 18:45:47.307866  411348 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 18:45:47.307878  411348 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 18:45:47.307917  411348 command_runner.go:130] > #
	I0916 18:45:47.307949  411348 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 18:45:47.307963  411348 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 18:45:47.307977  411348 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 18:45:47.307989  411348 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 18:45:47.307995  411348 command_runner.go:130] > #
	I0916 18:45:47.308001  411348 command_runner.go:130] > # default_mounts_file = ""
	I0916 18:45:47.308010  411348 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 18:45:47.308032  411348 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 18:45:47.308043  411348 command_runner.go:130] > pids_limit = 1024
	I0916 18:45:47.308052  411348 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 18:45:47.308065  411348 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 18:45:47.308075  411348 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 18:45:47.308090  411348 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 18:45:47.308100  411348 command_runner.go:130] > # log_size_max = -1
	I0916 18:45:47.308111  411348 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 18:45:47.308125  411348 command_runner.go:130] > # log_to_journald = false
	I0916 18:45:47.308138  411348 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 18:45:47.308146  411348 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 18:45:47.308154  411348 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 18:45:47.308164  411348 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 18:45:47.308173  411348 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 18:45:47.308179  411348 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 18:45:47.308185  411348 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 18:45:47.308193  411348 command_runner.go:130] > # read_only = false
	I0916 18:45:47.308206  411348 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 18:45:47.308218  411348 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 18:45:47.308229  411348 command_runner.go:130] > # live configuration reload.
	I0916 18:45:47.308235  411348 command_runner.go:130] > # log_level = "info"
	I0916 18:45:47.308250  411348 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 18:45:47.308261  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.308270  411348 command_runner.go:130] > # log_filter = ""
	I0916 18:45:47.308279  411348 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 18:45:47.308292  411348 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 18:45:47.308300  411348 command_runner.go:130] > # separated by comma.
	I0916 18:45:47.308315  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308324  411348 command_runner.go:130] > # uid_mappings = ""
	I0916 18:45:47.308334  411348 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 18:45:47.308350  411348 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 18:45:47.308357  411348 command_runner.go:130] > # separated by comma.
	I0916 18:45:47.308370  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308380  411348 command_runner.go:130] > # gid_mappings = ""
	I0916 18:45:47.308389  411348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 18:45:47.308401  411348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 18:45:47.308416  411348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 18:45:47.308434  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308444  411348 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 18:45:47.308455  411348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 18:45:47.308468  411348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 18:45:47.308480  411348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 18:45:47.308492  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308503  411348 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 18:45:47.308512  411348 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 18:45:47.308522  411348 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 18:45:47.308528  411348 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 18:45:47.308534  411348 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 18:45:47.308540  411348 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 18:45:47.308548  411348 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 18:45:47.308555  411348 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 18:45:47.308562  411348 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 18:45:47.308566  411348 command_runner.go:130] > drop_infra_ctr = false
	I0916 18:45:47.308579  411348 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 18:45:47.308591  411348 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 18:45:47.308603  411348 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 18:45:47.308614  411348 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 18:45:47.308629  411348 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 18:45:47.308641  411348 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 18:45:47.308654  411348 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 18:45:47.308663  411348 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 18:45:47.308669  411348 command_runner.go:130] > # shared_cpuset = ""
	I0916 18:45:47.308677  411348 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 18:45:47.308689  411348 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 18:45:47.308696  411348 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 18:45:47.308710  411348 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 18:45:47.308721  411348 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 18:45:47.308732  411348 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 18:45:47.308744  411348 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 18:45:47.308754  411348 command_runner.go:130] > # enable_criu_support = false
	I0916 18:45:47.308762  411348 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 18:45:47.308780  411348 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 18:45:47.308798  411348 command_runner.go:130] > # enable_pod_events = false
	I0916 18:45:47.308811  411348 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 18:45:47.308834  411348 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 18:45:47.308842  411348 command_runner.go:130] > # default_runtime = "runc"
	I0916 18:45:47.308852  411348 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 18:45:47.308876  411348 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0916 18:45:47.308896  411348 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 18:45:47.308910  411348 command_runner.go:130] > # creation as a file is not desired either.
	I0916 18:45:47.308935  411348 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 18:45:47.308947  411348 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 18:45:47.308955  411348 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 18:45:47.308963  411348 command_runner.go:130] > # ]
	I0916 18:45:47.308974  411348 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 18:45:47.308984  411348 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 18:45:47.308994  411348 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 18:45:47.309005  411348 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 18:45:47.309011  411348 command_runner.go:130] > #
	I0916 18:45:47.309020  411348 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 18:45:47.309030  411348 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 18:45:47.309068  411348 command_runner.go:130] > # runtime_type = "oci"
	I0916 18:45:47.309078  411348 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 18:45:47.309086  411348 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 18:45:47.309096  411348 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 18:45:47.309103  411348 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 18:45:47.309113  411348 command_runner.go:130] > # monitor_env = []
	I0916 18:45:47.309123  411348 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 18:45:47.309132  411348 command_runner.go:130] > # allowed_annotations = []
	I0916 18:45:47.309141  411348 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 18:45:47.309150  411348 command_runner.go:130] > # Where:
	I0916 18:45:47.309158  411348 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 18:45:47.309168  411348 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 18:45:47.309177  411348 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 18:45:47.309198  411348 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 18:45:47.309208  411348 command_runner.go:130] > #   in $PATH.
	I0916 18:45:47.309218  411348 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 18:45:47.309228  411348 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 18:45:47.309241  411348 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 18:45:47.309249  411348 command_runner.go:130] > #   state.
	I0916 18:45:47.309255  411348 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 18:45:47.309266  411348 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0916 18:45:47.309279  411348 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 18:45:47.309290  411348 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 18:45:47.309302  411348 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 18:45:47.309316  411348 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 18:45:47.309326  411348 command_runner.go:130] > #   The currently recognized values are:
	I0916 18:45:47.309336  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 18:45:47.309378  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 18:45:47.309390  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 18:45:47.309403  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 18:45:47.309417  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 18:45:47.309429  411348 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 18:45:47.309440  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 18:45:47.309454  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 18:45:47.309467  411348 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 18:45:47.309478  411348 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 18:45:47.309488  411348 command_runner.go:130] > #   deprecated option "conmon".
	I0916 18:45:47.309498  411348 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 18:45:47.309509  411348 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 18:45:47.309522  411348 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 18:45:47.309533  411348 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 18:45:47.309546  411348 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 18:45:47.309556  411348 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 18:45:47.309566  411348 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 18:45:47.309577  411348 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 18:45:47.309583  411348 command_runner.go:130] > #
	I0916 18:45:47.309600  411348 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 18:45:47.309609  411348 command_runner.go:130] > #
	I0916 18:45:47.309619  411348 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 18:45:47.309630  411348 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 18:45:47.309646  411348 command_runner.go:130] > #
	I0916 18:45:47.309656  411348 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 18:45:47.309675  411348 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 18:45:47.309683  411348 command_runner.go:130] > #
	I0916 18:45:47.309693  411348 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 18:45:47.309702  411348 command_runner.go:130] > # feature.
	I0916 18:45:47.309707  411348 command_runner.go:130] > #
	I0916 18:45:47.309720  411348 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0916 18:45:47.309732  411348 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 18:45:47.309743  411348 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 18:45:47.309753  411348 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 18:45:47.309765  411348 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 18:45:47.309773  411348 command_runner.go:130] > #
	I0916 18:45:47.309783  411348 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 18:45:47.309795  411348 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 18:45:47.309803  411348 command_runner.go:130] > #
	I0916 18:45:47.309811  411348 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0916 18:45:47.309820  411348 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 18:45:47.309827  411348 command_runner.go:130] > #
	I0916 18:45:47.309837  411348 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 18:45:47.309849  411348 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 18:45:47.309859  411348 command_runner.go:130] > # limitation.
	I0916 18:45:47.309866  411348 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 18:45:47.309877  411348 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 18:45:47.309884  411348 command_runner.go:130] > runtime_type = "oci"
	I0916 18:45:47.309892  411348 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 18:45:47.309900  411348 command_runner.go:130] > runtime_config_path = ""
	I0916 18:45:47.309908  411348 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 18:45:47.309918  411348 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 18:45:47.309932  411348 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 18:45:47.309941  411348 command_runner.go:130] > monitor_env = [
	I0916 18:45:47.309951  411348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 18:45:47.309960  411348 command_runner.go:130] > ]
	I0916 18:45:47.309967  411348 command_runner.go:130] > privileged_without_host_devices = false
	I0916 18:45:47.309980  411348 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 18:45:47.309991  411348 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 18:45:47.310004  411348 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 18:45:47.310015  411348 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 18:45:47.310029  411348 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 18:45:47.310041  411348 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 18:45:47.310061  411348 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 18:45:47.310077  411348 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 18:45:47.310090  411348 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 18:45:47.310106  411348 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 18:45:47.310114  411348 command_runner.go:130] > # Example:
	I0916 18:45:47.310123  411348 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 18:45:47.310134  411348 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 18:45:47.310141  411348 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 18:45:47.310153  411348 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 18:45:47.310162  411348 command_runner.go:130] > # cpuset = 0
	I0916 18:45:47.310170  411348 command_runner.go:130] > # cpushares = "0-1"
	I0916 18:45:47.310179  411348 command_runner.go:130] > # Where:
	I0916 18:45:47.310189  411348 command_runner.go:130] > # The workload name is workload-type.
	I0916 18:45:47.310202  411348 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 18:45:47.310213  411348 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 18:45:47.310225  411348 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 18:45:47.310240  411348 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 18:45:47.310253  411348 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 18:45:47.310267  411348 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 18:45:47.310280  411348 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 18:45:47.310290  411348 command_runner.go:130] > # Default value is set to true
	I0916 18:45:47.310299  411348 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 18:45:47.310308  411348 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 18:45:47.310318  411348 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 18:45:47.310329  411348 command_runner.go:130] > # Default value is set to 'false'
	I0916 18:45:47.310336  411348 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 18:45:47.310355  411348 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 18:45:47.310363  411348 command_runner.go:130] > #
	I0916 18:45:47.310372  411348 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 18:45:47.310384  411348 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 18:45:47.310396  411348 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 18:45:47.310402  411348 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 18:45:47.310410  411348 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 18:45:47.310415  411348 command_runner.go:130] > [crio.image]
	I0916 18:45:47.310424  411348 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 18:45:47.310432  411348 command_runner.go:130] > # default_transport = "docker://"
	I0916 18:45:47.310445  411348 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 18:45:47.310455  411348 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 18:45:47.310461  411348 command_runner.go:130] > # global_auth_file = ""
	I0916 18:45:47.310469  411348 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 18:45:47.310477  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.310484  411348 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 18:45:47.310490  411348 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 18:45:47.310499  411348 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 18:45:47.310507  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.310515  411348 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 18:45:47.310525  411348 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 18:45:47.310534  411348 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 18:45:47.310544  411348 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 18:45:47.310553  411348 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 18:45:47.310559  411348 command_runner.go:130] > # pause_command = "/pause"
	I0916 18:45:47.310568  411348 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 18:45:47.310574  411348 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 18:45:47.310581  411348 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 18:45:47.310594  411348 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 18:45:47.310610  411348 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 18:45:47.310623  411348 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 18:45:47.310639  411348 command_runner.go:130] > # pinned_images = [
	I0916 18:45:47.310647  411348 command_runner.go:130] > # ]
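	Since pause_image above is pinned to registry.k8s.io/pause:3.10, the runtime can be asked directly whether that image is present on the node; a minimal sketch, assuming crictl is available inside the VM (it is invoked later in this log):
	# list images known to CRI-O and filter for the pinned pause image
	sudo crictl images | grep pause
	# pull it explicitly if missing (name taken from the pause_image setting above)
	sudo crictl pull registry.k8s.io/pause:3.10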
	I0916 18:45:47.310660  411348 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 18:45:47.310672  411348 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 18:45:47.310680  411348 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 18:45:47.310690  411348 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 18:45:47.310703  411348 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 18:45:47.310712  411348 command_runner.go:130] > # signature_policy = ""
	I0916 18:45:47.310725  411348 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 18:45:47.310738  411348 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 18:45:47.310752  411348 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 18:45:47.310765  411348 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 18:45:47.310774  411348 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 18:45:47.310784  411348 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
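	As the comments above note, the system-wide fallback policy lives at /etc/containers/policy.json; a short sketch of inspecting it, with the permissive example body shown only as an assumption about the common distro default:
	# show the active system-wide signature policy
	sudo cat /etc/containers/policy.json
	# a permissive example policy (assumed default on many distributions):
	# {"default":[{"type":"insecureAcceptAnything"}]}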
	I0916 18:45:47.310800  411348 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 18:45:47.310814  411348 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 18:45:47.310826  411348 command_runner.go:130] > # changing them here.
	I0916 18:45:47.310835  411348 command_runner.go:130] > # insecure_registries = [
	I0916 18:45:47.310843  411348 command_runner.go:130] > # ]
	I0916 18:45:47.310854  411348 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 18:45:47.310862  411348 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 18:45:47.310867  411348 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 18:45:47.310877  411348 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 18:45:47.310887  411348 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 18:45:47.310897  411348 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 18:45:47.310906  411348 command_runner.go:130] > # CNI plugins.
	I0916 18:45:47.310916  411348 command_runner.go:130] > [crio.network]
	I0916 18:45:47.310928  411348 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 18:45:47.310939  411348 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0916 18:45:47.310949  411348 command_runner.go:130] > # cni_default_network = ""
	I0916 18:45:47.310959  411348 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 18:45:47.310969  411348 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 18:45:47.310982  411348 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 18:45:47.310994  411348 command_runner.go:130] > # plugin_dirs = [
	I0916 18:45:47.311003  411348 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 18:45:47.311011  411348 command_runner.go:130] > # ]
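	To see what CRI-O would actually pick up from these defaults, the directories can simply be listed on the node; a sketch, assuming the default network_dir and plugin_dirs shown above are in effect:
	# CNI network definitions CRI-O scans (the first one found is used when cni_default_network is unset)
	sudo ls -l /etc/cni/net.d/
	# CNI plugin binaries
	sudo ls -l /opt/cni/bin/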
	I0916 18:45:47.311024  411348 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 18:45:47.311032  411348 command_runner.go:130] > [crio.metrics]
	I0916 18:45:47.311044  411348 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 18:45:47.311052  411348 command_runner.go:130] > enable_metrics = true
	I0916 18:45:47.311063  411348 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 18:45:47.311074  411348 command_runner.go:130] > # Per default all metrics are enabled.
	I0916 18:45:47.311084  411348 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 18:45:47.311097  411348 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 18:45:47.311109  411348 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 18:45:47.311119  411348 command_runner.go:130] > # metrics_collectors = [
	I0916 18:45:47.311129  411348 command_runner.go:130] > # 	"operations",
	I0916 18:45:47.311139  411348 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 18:45:47.311147  411348 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 18:45:47.311152  411348 command_runner.go:130] > # 	"operations_errors",
	I0916 18:45:47.311162  411348 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 18:45:47.311172  411348 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 18:45:47.311179  411348 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 18:45:47.311190  411348 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 18:45:47.311200  411348 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 18:45:47.311209  411348 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 18:45:47.311218  411348 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 18:45:47.311229  411348 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 18:45:47.311239  411348 command_runner.go:130] > # 	"containers_oom_total",
	I0916 18:45:47.311246  411348 command_runner.go:130] > # 	"containers_oom",
	I0916 18:45:47.311252  411348 command_runner.go:130] > # 	"processes_defunct",
	I0916 18:45:47.311261  411348 command_runner.go:130] > # 	"operations_total",
	I0916 18:45:47.311272  411348 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 18:45:47.311283  411348 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 18:45:47.311294  411348 command_runner.go:130] > # 	"operations_errors_total",
	I0916 18:45:47.311304  411348 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 18:45:47.311314  411348 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 18:45:47.311324  411348 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 18:45:47.311334  411348 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 18:45:47.311345  411348 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 18:45:47.311354  411348 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 18:45:47.311365  411348 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 18:45:47.311376  411348 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 18:45:47.311384  411348 command_runner.go:130] > # ]
	I0916 18:45:47.311395  411348 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 18:45:47.311404  411348 command_runner.go:130] > # metrics_port = 9090
	I0916 18:45:47.311416  411348 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 18:45:47.311424  411348 command_runner.go:130] > # metrics_socket = ""
	I0916 18:45:47.311435  411348 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 18:45:47.311441  411348 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 18:45:47.311453  411348 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 18:45:47.311465  411348 command_runner.go:130] > # certificate on any modification event.
	I0916 18:45:47.311474  411348 command_runner.go:130] > # metrics_cert = ""
	I0916 18:45:47.311485  411348 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 18:45:47.311496  411348 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 18:45:47.311505  411348 command_runner.go:130] > # metrics_key = ""
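	With enable_metrics set to true above, the Prometheus endpoint should be scrapeable on the node; a minimal check, assuming the commented-out default metrics_port of 9090 is in effect (exact metric names vary by CRI-O version):
	# fetch CRI-O's Prometheus metrics and show a few crio-related series
	curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio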
	I0916 18:45:47.311517  411348 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 18:45:47.311527  411348 command_runner.go:130] > [crio.tracing]
	I0916 18:45:47.311536  411348 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 18:45:47.311544  411348 command_runner.go:130] > # enable_tracing = false
	I0916 18:45:47.311556  411348 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 18:45:47.311566  411348 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 18:45:47.311580  411348 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 18:45:47.311591  411348 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
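	If trace export were wanted, these defaults could be overridden with a drop-in rather than by editing the main file; a sketch, assuming CRI-O reads /etc/crio/crio.conf.d/ as usual, and treating the OTLP collector on 127.0.0.1:4317 as an assumption rather than something this job runs:
	# hypothetical drop-in enabling OpenTelemetry traces
	sudo tee /etc/crio/crio.conf.d/20-tracing.conf >/dev/null <<-'EOF'
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	EOF
	sudo systemctl restart crio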
	I0916 18:45:47.311601  411348 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 18:45:47.311610  411348 command_runner.go:130] > [crio.nri]
	I0916 18:45:47.311619  411348 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 18:45:47.311629  411348 command_runner.go:130] > # enable_nri = false
	I0916 18:45:47.311638  411348 command_runner.go:130] > # NRI socket to listen on.
	I0916 18:45:47.311649  411348 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 18:45:47.311660  411348 command_runner.go:130] > # NRI plugin directory to use.
	I0916 18:45:47.311672  411348 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 18:45:47.311686  411348 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 18:45:47.311697  411348 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 18:45:47.311705  411348 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 18:45:47.311709  411348 command_runner.go:130] > # nri_disable_connections = false
	I0916 18:45:47.311719  411348 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 18:45:47.311729  411348 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 18:45:47.311738  411348 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 18:45:47.311748  411348 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 18:45:47.311759  411348 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 18:45:47.311767  411348 command_runner.go:130] > [crio.stats]
	I0916 18:45:47.311777  411348 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 18:45:47.311788  411348 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 18:45:47.311798  411348 command_runner.go:130] > # stats_collection_period = 0
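	The template above is mostly commented defaults; the values minikube actually overrides (pause_image, enable_metrics) can be double-checked straight from the files on the node. A sketch, assuming the standard /etc/crio/crio.conf plus /etc/crio/crio.conf.d layout (the "crio config" call is an assumption about this CRI-O build's CLI):
	# show only the overridden keys from the main config and any drop-ins
	grep -E 'pause_image|enable_metrics' /etc/crio/crio.conf /etc/crio/crio.conf.d/*.conf 2>/dev/null
	# or ask the binary to render the configuration it would run with
	sudo crio config | grep -E 'pause_image|enable_metrics'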
	I0916 18:45:47.311825  411348 command_runner.go:130] ! time="2024-09-16 18:45:47.257996033Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 18:45:47.311846  411348 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 18:45:47.311938  411348 cni.go:84] Creating CNI manager for ""
	I0916 18:45:47.311951  411348 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 18:45:47.311961  411348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 18:45:47.311981  411348 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-588591 NodeName:multinode-588591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 18:45:47.312132  411348 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-588591"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 18:45:47.312200  411348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:45:47.323063  411348 command_runner.go:130] > kubeadm
	I0916 18:45:47.323093  411348 command_runner.go:130] > kubectl
	I0916 18:45:47.323098  411348 command_runner.go:130] > kubelet
	I0916 18:45:47.323193  411348 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 18:45:47.323258  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 18:45:47.336229  411348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 18:45:47.353738  411348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:45:47.372366  411348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
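	With the rendered manifest now at /var/tmp/minikube/kubeadm.yaml.new, kubeadm itself can parse it without changing the node; a sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries/v1.31.1 (confirmed a few lines above), and noting that preflight warnings are expected on a host that already runs a control plane:
	# parse and dry-run the generated kubeadm configuration; no cluster state is modified
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run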
	I0916 18:45:47.390868  411348 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0916 18:45:47.395261  411348 command_runner.go:130] > 192.168.39.90	control-plane.minikube.internal
	I0916 18:45:47.395345  411348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:45:47.534583  411348 ssh_runner.go:195] Run: sudo systemctl start kubelet
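	After the 10-kubeadm.conf drop-in and kubelet.service unit are copied, systemd is reloaded and kubelet is started above; the usual health checks then apply. A minimal sketch:
	# confirm the unit picked up the drop-in and is active
	systemctl status kubelet --no-pager
	# recent kubelet log lines
	sudo journalctl -u kubelet -n 20 --no-pager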
	I0916 18:45:47.550550  411348 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591 for IP: 192.168.39.90
	I0916 18:45:47.550586  411348 certs.go:194] generating shared ca certs ...
	I0916 18:45:47.550609  411348 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:45:47.550781  411348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:45:47.550838  411348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:45:47.550852  411348 certs.go:256] generating profile certs ...
	I0916 18:45:47.550982  411348 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/client.key
	I0916 18:45:47.551076  411348 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.key.a0b9fd92
	I0916 18:45:47.551138  411348 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.key
	I0916 18:45:47.551154  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:45:47.551180  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:45:47.551198  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:45:47.551223  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:45:47.551242  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:45:47.551261  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:45:47.551280  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:45:47.551298  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:45:47.551432  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:45:47.551508  411348 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:45:47.551524  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:45:47.551559  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:45:47.551596  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:45:47.551629  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:45:47.551695  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:45:47.551741  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.551765  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.551786  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.552480  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:45:47.579482  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:45:47.606015  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:45:47.632548  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:45:47.659384  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 18:45:47.684505  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 18:45:47.709886  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:45:47.737319  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:45:47.762339  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:45:47.787960  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:45:47.814814  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:45:47.841593  411348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 18:45:47.858886  411348 ssh_runner.go:195] Run: openssl version
	I0916 18:45:47.865136  411348 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 18:45:47.865226  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:45:47.876423  411348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.881142  411348 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.881245  411348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.881309  411348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.887178  411348 command_runner.go:130] > b5213941
	I0916 18:45:47.887250  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:45:47.897299  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:45:47.908838  411348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.913414  411348 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.913447  411348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.913487  411348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.919376  411348 command_runner.go:130] > 51391683
	I0916 18:45:47.919480  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:45:47.929331  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:45:47.940126  411348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.945253  411348 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.945436  411348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.945491  411348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.951866  411348 command_runner.go:130] > 3ec20f2e
	I0916 18:45:47.952090  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
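	The pattern in the three blocks above is OpenSSL's hashed-directory trust store: each CA under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash (for example b5213941.0 for minikubeCA.pem), which is how CApath lookups find it. A sketch of verifying one of the links, reusing the hash computed above:
	# recompute the subject hash and confirm the symlink resolves to the CA
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	ls -l /etc/ssl/certs/b5213941.0
	# the hashed CApath now trusts the (self-signed) CA
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem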
	I0916 18:45:47.962372  411348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:45:47.967453  411348 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:45:47.967487  411348 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 18:45:47.967496  411348 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0916 18:45:47.967505  411348 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 18:45:47.967528  411348 command_runner.go:130] > Access: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967535  411348 command_runner.go:130] > Modify: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967543  411348 command_runner.go:130] > Change: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967550  411348 command_runner.go:130] >  Birth: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967616  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 18:45:47.973790  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.973876  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 18:45:47.979824  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.979897  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 18:45:47.985980  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.986208  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 18:45:47.992132  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.992198  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 18:45:47.997720  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.997789  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 18:45:48.003192  411348 command_runner.go:130] > Certificate will not expire
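	Each -checkend 86400 call above asks whether the certificate expires within the next 86,400 seconds (24 h); the exit status is what matters, the printed text is informational. A small sketch against one of the certs copied earlier:
	# exit 0 (and print "Certificate will not expire") if valid for at least another 24h
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
	  echo "apiserver cert valid for at least 24h"
	else
	  echo "apiserver cert expires within 24h (or already expired)"
	fi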
	I0916 18:45:48.003362  411348 kubeadm.go:392] StartCluster: {Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:45:48.003490  411348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 18:45:48.003549  411348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 18:45:48.041315  411348 command_runner.go:130] > 5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9
	I0916 18:45:48.041347  411348 command_runner.go:130] > 536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da
	I0916 18:45:48.041353  411348 command_runner.go:130] > b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c
	I0916 18:45:48.041360  411348 command_runner.go:130] > 88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1
	I0916 18:45:48.041366  411348 command_runner.go:130] > 0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4
	I0916 18:45:48.041385  411348 command_runner.go:130] > dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1
	I0916 18:45:48.041390  411348 command_runner.go:130] > 8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9
	I0916 18:45:48.041409  411348 command_runner.go:130] > 6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f
	I0916 18:45:48.043263  411348 cri.go:89] found id: "5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9"
	I0916 18:45:48.043284  411348 cri.go:89] found id: "536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da"
	I0916 18:45:48.043289  411348 cri.go:89] found id: "b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c"
	I0916 18:45:48.043295  411348 cri.go:89] found id: "88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1"
	I0916 18:45:48.043298  411348 cri.go:89] found id: "0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4"
	I0916 18:45:48.043301  411348 cri.go:89] found id: "dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1"
	I0916 18:45:48.043304  411348 cri.go:89] found id: "8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9"
	I0916 18:45:48.043307  411348 cri.go:89] found id: "6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f"
	I0916 18:45:48.043309  411348 cri.go:89] found id: ""
	I0916 18:45:48.043379  411348 ssh_runner.go:195] Run: sudo runc list -f json
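	The IDs above come from filtering running and exited containers by the kube-system namespace label; any of them can be examined further with crictl. A short sketch, reusing the filter and the first ID from the output above:
	# same listing minikube ran: container IDs in the kube-system namespace
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# dump runtime details for one of the listed containers
	sudo crictl inspect 5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9 | head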
	
	
	==> CRI-O <==
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.636710861Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2875b0f2-dced-4485-b22b-8d56544e1f52 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.639077636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=888472e1-5b1b-4d49-acb0-bb2180e2efec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.639454242Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512453639433823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=888472e1-5b1b-4d49-acb0-bb2180e2efec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.640365470Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f1827daa-94f7-45c3-aa46-7062bfcd194b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.640784974Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-npxwd,Uid:7ae52a62-68b2-4df8-9a32-7c101e32fc1f,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512387705875403,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:45:53.550200392Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jl97q,Uid:c1ecace7-ec89-48df-ba67-9d4db464f114,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1726512353921423119,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:45:53.550201613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&PodSandboxMetadata{Name:kindnet-pcwtq,Uid:302842c7-44d7-4798-8bd8-bffb298e5ae5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512353895470439,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-09-16T18:45:53.550206585Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:302e9fd7-e7c3-4885-8081-870d67fa9113,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512353891978549,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{
\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T18:45:53.550205120Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&PodSandboxMetadata{Name:kube-proxy-n6hld,Uid:1d0b45a9-faa7-42f6-92b7-6d4f80895cac,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512353889850194,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,k8s-app: kube-proxy,pod-templ
ate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:45:53.550198602Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-588591,Uid:0f71fd39f8b903bb242fba68909eb6d5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512350060103969,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f71fd39f8b903bb242fba68909eb6d5,kubernetes.io/config.seen: 2024-09-16T18:45:49.557271986Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&PodSandboxMetadat
a{Name:kube-scheduler-multinode-588591,Uid:896dbad86dc3607e987f75e8d5fb8b6d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512350056703402,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 896dbad86dc3607e987f75e8d5fb8b6d,kubernetes.io/config.seen: 2024-09-16T18:45:49.557272914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-588591,Uid:f0c277e362a7aacc4d6c4b0acce86d8c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512350054128780,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-5
88591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.90:8443,kubernetes.io/config.hash: f0c277e362a7aacc4d6c4b0acce86d8c,kubernetes.io/config.seen: 2024-09-16T18:45:49.557270740Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&PodSandboxMetadata{Name:etcd-multinode-588591,Uid:086ece32f2ae0140cf85d2dbbad4a779,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726512350051995933,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.90:2379,kubernete
s.io/config.hash: 086ece32f2ae0140cf85d2dbbad4a779,kubernetes.io/config.seen: 2024-09-16T18:45:49.557267177Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-npxwd,Uid:7ae52a62-68b2-4df8-9a32-7c101e32fc1f,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726512022123056445,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:40:21.807947491Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:302e9fd7-e7c3-4885-8081-870d67fa9113,Namespace:kube-system,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1726511965900386093,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\"
:\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-16T18:39:25.582927582Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-jl97q,Uid:c1ecace7-ec89-48df-ba67-9d4db464f114,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511965869639920,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:39:25.563174635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&PodSandboxMetadata{Name:kube-proxy-n6hld,Uid:1d0b45a9-faa7-42f6-92b7-6d4f80895cac,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511954377639703,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:39:12.271332004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&PodSandboxMetadata{Name:kindnet-pcwtq,Uid:302842c7-44d7-4798-8bd8-bffb298e5ae5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511954116445638,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,k8s-app: kindnet,pod-t
emplate-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-16T18:39:12.310283547Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-588591,Uid:896dbad86dc3607e987f75e8d5fb8b6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511941572246097,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 896dbad86dc3607e987f75e8d5fb8b6d,kubernetes.io/config.seen: 2024-09-16T18:39:01.103118979Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&PodSandboxMetadata{Name:et
cd-multinode-588591,Uid:086ece32f2ae0140cf85d2dbbad4a779,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511941569630702,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.90:2379,kubernetes.io/config.hash: 086ece32f2ae0140cf85d2dbbad4a779,kubernetes.io/config.seen: 2024-09-16T18:39:01.103112068Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-588591,Uid:0f71fd39f8b903bb242fba68909eb6d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511941563368167,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.containe
r.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f71fd39f8b903bb242fba68909eb6d5,kubernetes.io/config.seen: 2024-09-16T18:39:01.103118053Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-588591,Uid:f0c277e362a7aacc4d6c4b0acce86d8c,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726511941562497432,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 1
92.168.39.90:8443,kubernetes.io/config.hash: f0c277e362a7aacc4d6c4b0acce86d8c,kubernetes.io/config.seen: 2024-09-16T18:39:01.103116714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f1827daa-94f7-45c3-aa46-7062bfcd194b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.641241122Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3851002f-86fe-4e7e-a4d1-51d48e034a95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.641305905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3851002f-86fe-4e7e-a4d1-51d48e034a95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.641762998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3851002f-86fe-4e7e-a4d1-51d48e034a95 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.642184741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e222d0b6-8c01-427a-9d21-5d9ab7f4c7f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.642225475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e222d0b6-8c01-427a-9d21-5d9ab7f4c7f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.642656607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e222d0b6-8c01-427a-9d21-5d9ab7f4c7f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.686419943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22f46f84-0512-481f-b946-7151edc2e5cb name=/runtime.v1.RuntimeService/Version
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.686565067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22f46f84-0512-481f-b946-7151edc2e5cb name=/runtime.v1.RuntimeService/Version
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.687796288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59518037-91f5-4c06-8d16-2c7f286c1495 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.688207463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512453688183971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59518037-91f5-4c06-8d16-2c7f286c1495 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.688680394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5318ca2d-57a9-4fab-a7d4-bd582fb895ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.688844555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5318ca2d-57a9-4fab-a7d4-bd582fb895ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.689198615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5318ca2d-57a9-4fab-a7d4-bd582fb895ae name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.731942798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4795ff79-1d7b-4df5-af30-7d9c9d85fdfd name=/runtime.v1.RuntimeService/Version
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.732033958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4795ff79-1d7b-4df5-af30-7d9c9d85fdfd name=/runtime.v1.RuntimeService/Version
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.733408253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dde71c43-cf25-40b9-aea9-3a9c22c510dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.733899094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512453733875161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dde71c43-cf25-40b9-aea9-3a9c22c510dc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.734477830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e45c005-385c-4c14-92e5-fa99d56e529a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.734605409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e45c005-385c-4c14-92e5-fa99d56e529a name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:47:33 multinode-588591 crio[2723]: time="2024-09-16 18:47:33.734956714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e45c005-385c-4c14-92e5-fa99d56e529a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ca66d6fc74c25       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9779b6ef07763       busybox-7dff88458-npxwd
	c6817a2d4d16a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   01f17e6951bd2       coredns-7c65d6cfc9-jl97q
	a13d60065ee77       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   a2e636e98d9ba       kindnet-pcwtq
	ad4f35e43ce6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e77116881bb7e       storage-provisioner
	744df38e318c9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   6b9acdd74ecdd       kube-proxy-n6hld
	f4ada8d8fc68c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   db85e1e134b2e       kube-scheduler-multinode-588591
	f541570364a8e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   3b839abce4d88       kube-apiserver-multinode-588591
	e5041a44acd42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   2efac27de3b00       kube-controller-manager-multinode-588591
	b8d55c2362a4a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   7e39d511fe6d8       etcd-multinode-588591
	5b97d2f06c63f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   026592543c307       busybox-7dff88458-npxwd
	5a75e89d7ed0f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   bfbed50ea0cf1       coredns-7c65d6cfc9-jl97q
	536b0b65dd5d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   06ea3547c2a84       storage-provisioner
	b7906688e5bed       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   362e382e3e94e       kindnet-pcwtq
	88aaa7fc69945       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   97ebc72e5610a       kube-proxy-n6hld
	0c1a836d4e499       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   16c8a9b30dd01       kube-controller-manager-multinode-588591
	dc6240b9d562f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   9e23e17572925       kube-scheduler-multinode-588591
	8ed063e308eaf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   c31c43bbb7835       etcd-multinode-588591
	6299f0d0edaa8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   e9cbad0a06bab       kube-apiserver-multinode-588591
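	
	A listing like the one above is what CRI-O's own CLI reports on the node; a minimal way to reproduce it for this profile (illustrative; profile name taken from this run, with the primary node as the default ssh target) is:
	
	  $ minikube ssh -p multinode-588591 -- sudo crictl ps -a
	
	crictl ps -a includes exited containers, which is why every restarted pod appears twice above: a Running entry with attempt 1 and an Exited entry with attempt 0.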
	
	
	==> coredns [5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9] <==
	[INFO] 10.244.1.2:38361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001803811s
	[INFO] 10.244.1.2:50829 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113376s
	[INFO] 10.244.1.2:52491 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105334s
	[INFO] 10.244.1.2:44029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001375788s
	[INFO] 10.244.1.2:33700 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088704s
	[INFO] 10.244.1.2:57781 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009308s
	[INFO] 10.244.1.2:49707 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089767s
	[INFO] 10.244.0.3:60280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108104s
	[INFO] 10.244.0.3:43474 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165734s
	[INFO] 10.244.0.3:60941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010869s
	[INFO] 10.244.0.3:50648 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068862s
	[INFO] 10.244.1.2:49400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150166s
	[INFO] 10.244.1.2:54278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123973s
	[INFO] 10.244.1.2:58754 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106786s
	[INFO] 10.244.1.2:57389 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093872s
	[INFO] 10.244.0.3:53773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106526s
	[INFO] 10.244.0.3:54541 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199549s
	[INFO] 10.244.0.3:48415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117547s
	[INFO] 10.244.0.3:50287 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117647s
	[INFO] 10.244.1.2:33435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00023337s
	[INFO] 10.244.1.2:60165 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000186825s
	[INFO] 10.244.1.2:36645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123148s
	[INFO] 10.244.1.2:59569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122663s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38189 - 9940 "HINFO IN 6765561091576542386.6857459497422022788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016467007s
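	
	The queries recorded by coredns above (kubernetes.default, host.minikube.internal, and the reverse lookups for 10.96.0.1/10.96.0.10) are ordinary in-cluster DNS traffic. They can be exercised from a throwaway pod; a sketch, assuming the kubeconfig context matches the profile name and reusing the busybox image already pulled in this cluster:
	
	  $ kubectl --context multinode-588591 run dns-check --rm -it --restart=Never \
	      --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local
	
	A NOERROR answer served by 10.96.0.10 (the kube-dns ClusterIP implied by the reverse lookups) confirms the restarted coredns replica is resolving cluster names.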
	
	
	==> describe nodes <==
	Name:               multinode-588591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-588591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=multinode-588591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T18_39_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-588591
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:47:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    multinode-588591
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b679443c1e1a452cb3b1075c2d8ed8e1
	  System UUID:                b679443c-1e1a-452c-b3b1-075c2d8ed8e1
	  Boot ID:                    b96c48ef-4b97-44e3-8117-b11c1bef2f85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-npxwd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 coredns-7c65d6cfc9-jl97q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-588591                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-pcwtq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-588591             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-588591    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-n6hld                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-588591             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m19s                  kube-proxy       
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node multinode-588591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node multinode-588591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s (x7 over 8m33s)  kubelet          Node multinode-588591 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet          Node multinode-588591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet          Node multinode-588591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s                  kubelet          Node multinode-588591 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m23s                  node-controller  Node multinode-588591 event: Registered Node multinode-588591 in Controller
	  Normal  NodeReady                8m9s                   kubelet          Node multinode-588591 status is now: NodeReady
	  Normal  Starting                 105s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)    kubelet          Node multinode-588591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)    kubelet          Node multinode-588591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)    kubelet          Node multinode-588591 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                    node-controller  Node multinode-588591 event: Registered Node multinode-588591 in Controller
	
	
	Name:               multinode-588591-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-588591-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=multinode-588591
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_46_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:46:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-588591-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:47:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:46:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:46:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:46:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:46:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-588591-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92c5bfd786184ceea39937b75880871e
	  System UUID:                92c5bfd7-8618-4cee-a399-37b75880871e
	  Boot ID:                    3558c459-ff2c-49bc-8552-f64d372cec00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pdqxd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-h69tp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-vcvjk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m36s)  kubelet     Node multinode-588591-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m36s)  kubelet     Node multinode-588591-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m36s)  kubelet     Node multinode-588591-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m15s                  kubelet     Node multinode-588591-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-588591-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-588591-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-588591-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-588591-m02 status is now: NodeReady
	
	
	Name:               multinode-588591-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-588591-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=multinode-588591
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_47_12_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:47:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-588591-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:47:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:47:30 +0000   Mon, 16 Sep 2024 18:47:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:47:30 +0000   Mon, 16 Sep 2024 18:47:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:47:30 +0000   Mon, 16 Sep 2024 18:47:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:47:30 +0000   Mon, 16 Sep 2024 18:47:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    multinode-588591-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 42756ba780f14f6485651bd9cb20d083
	  System UUID:                42756ba7-80f1-4f64-8565-1bd9cb20d083
	  Boot ID:                    25feb12b-c368-4226-b59f-50785e7fd667
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-z7bdt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-8kssm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m40s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m36s)  kubelet          Node multinode-588591-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m36s)  kubelet          Node multinode-588591-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m36s)  kubelet          Node multinode-588591-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m16s                  kubelet          Node multinode-588591-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-588591-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-588591-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-588591-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m25s                  kubelet          Node multinode-588591-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     22s                    cidrAllocator    Node multinode-588591-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-588591-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-588591-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-588591-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-588591-m03 event: Registered Node multinode-588591-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-588591-m03 status is now: NodeReady
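	
	The three node blocks above are standard kubectl node descriptions; against this profile's kubeconfig they can be regenerated with minikube's bundled kubectl (illustrative):
	
	  $ out/minikube-linux-amd64 kubectl -p multinode-588591 -- describe nodes
	
	Each node carries its own PodCIDR (10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24), which kindnet uses to install inter-node routes for pod traffic.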
	
	
	==> dmesg <==
	[  +0.060774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066874] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.195843] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.131543] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.306425] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.017981] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.705086] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.065609] kauditd_printk_skb: 158 callbacks suppressed
	[Sep16 18:39] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.091456] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.153180] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.106913] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.971900] kauditd_printk_skb: 60 callbacks suppressed
	[Sep16 18:40] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 18:45] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.149807] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.188900] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.150344] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.283409] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +0.689940] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +1.902382] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[  +4.664651] kauditd_printk_skb: 184 callbacks suppressed
	[  +5.884025] kauditd_printk_skb: 34 callbacks suppressed
	[Sep16 18:46] systemd-fstab-generator[3780]: Ignoring "noauto" option for root device
	[ +19.623369] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9] <==
	{"level":"warn","ts":"2024-09-16T18:40:59.688351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.391584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T18:40:59.688398Z","caller":"traceutil/trace.go:171","msg":"trace[1484628912] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:637; }","duration":"358.439143ms","start":"2024-09-16T18:40:59.329953Z","end":"2024-09-16T18:40:59.688392Z","steps":["trace[1484628912] 'agreement among raft nodes before linearized reading'  (duration: 358.356329ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.688435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.329928Z","time spent":"358.50262ms","remote":"127.0.0.1:34756","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-16T18:40:59.688605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.743719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-588591-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T18:40:59.688648Z","caller":"traceutil/trace.go:171","msg":"trace[1508024205] range","detail":"{range_begin:/registry/csinodes/multinode-588591-m03; range_end:; response_count:0; response_revision:637; }","duration":"234.787382ms","start":"2024-09-16T18:40:59.453855Z","end":"2024-09-16T18:40:59.688643Z","steps":["trace[1508024205] 'agreement among raft nodes before linearized reading'  (duration: 234.732028ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T18:40:59.688789Z","caller":"traceutil/trace.go:171","msg":"trace[1574818506] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"358.788516ms","start":"2024-09-16T18:40:59.329994Z","end":"2024-09-16T18:40:59.688783Z","steps":["trace[1574818506] 'process raft request'  (duration: 357.312773ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.689399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.329989Z","time spent":"359.385691ms","remote":"127.0.0.1:58052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":699,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-8kssm.17f5cd920c757e30\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-8kssm.17f5cd920c757e30\" value_size:619 lease:4160642142840671290 >> failure:<>"}
	{"level":"info","ts":"2024-09-16T18:40:59.689635Z","caller":"traceutil/trace.go:171","msg":"trace[1677215337] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"356.667825ms","start":"2024-09-16T18:40:59.332957Z","end":"2024-09-16T18:40:59.689625Z","steps":["trace[1677215337] 'process raft request'  (duration: 354.914471ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.689726Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.332939Z","time spent":"356.765835ms","remote":"127.0.0.1:58052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":657,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet.17f5cd920c9f2600\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet.17f5cd920c9f2600\" value_size:586 lease:4160642142840671290 >> failure:<>"}
	{"level":"info","ts":"2024-09-16T18:40:59.689974Z","caller":"traceutil/trace.go:171","msg":"trace[1642580106] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"355.522411ms","start":"2024-09-16T18:40:59.334445Z","end":"2024-09-16T18:40:59.689967Z","steps":["trace[1642580106] 'process raft request'  (duration: 353.468257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.690048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.334429Z","time spent":"355.586868ms","remote":"127.0.0.1:58470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4708,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:526 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4660 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"info","ts":"2024-09-16T18:40:59.690225Z","caller":"traceutil/trace.go:171","msg":"trace[1405851371] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"354.43609ms","start":"2024-09-16T18:40:59.335781Z","end":"2024-09-16T18:40:59.690217Z","steps":["trace[1405851371] 'process raft request'  (duration: 352.172714ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.690322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.335768Z","time spent":"354.532382ms","remote":"127.0.0.1:58150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:609 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T18:40:59.690505Z","caller":"traceutil/trace.go:171","msg":"trace[943818931] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"353.34272ms","start":"2024-09-16T18:40:59.337156Z","end":"2024-09-16T18:40:59.690499Z","steps":["trace[943818931] 'process raft request'  (duration: 350.833953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.690610Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.337144Z","time spent":"353.443052ms","remote":"127.0.0.1:58156","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2331,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-588591-m03\" mod_revision:623 > success:<request_put:<key:\"/registry/minions/multinode-588591-m03\" value_size:2285 >> failure:<request_range:<key:\"/registry/minions/multinode-588591-m03\" > >"}
	{"level":"info","ts":"2024-09-16T18:44:14.734761Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T18:44:14.734900Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-588591","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	{"level":"warn","ts":"2024-09-16T18:44:14.735070Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:44:14.735223Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:44:14.772116Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:44:14.772159Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T18:44:14.773596Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d381aaacda0b9bd","current-leader-member-id":"8d381aaacda0b9bd"}
	{"level":"info","ts":"2024-09-16T18:44:14.777152Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-16T18:44:14.777272Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-16T18:44:14.777300Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-588591","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	
	
	==> etcd [b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311] <==
	{"level":"info","ts":"2024-09-16T18:45:50.587886Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-16T18:45:50.583125Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T18:45:50.587906Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T18:45:50.587916Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T18:45:50.583177Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T18:45:50.584130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd switched to configuration voters=(10175912678940260797)"}
	{"level":"info","ts":"2024-09-16T18:45:50.588830Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cf3a1558a63fa9e","local-member-id":"8d381aaacda0b9bd","added-peer-id":"8d381aaacda0b9bd","added-peer-peer-urls":["https://192.168.39.90:2380"]}
	{"level":"info","ts":"2024-09-16T18:45:50.588984Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf3a1558a63fa9e","local-member-id":"8d381aaacda0b9bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T18:45:50.589037Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T18:45:51.631604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T18:45:51.631727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T18:45:51.631780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd received MsgPreVoteResp from 8d381aaacda0b9bd at term 2"}
	{"level":"info","ts":"2024-09-16T18:45:51.631818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.631843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.631870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became leader at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.631896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.634630Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8d381aaacda0b9bd","local-member-attributes":"{Name:multinode-588591 ClientURLs:[https://192.168.39.90:2379]}","request-path":"/0/members/8d381aaacda0b9bd/attributes","cluster-id":"8cf3a1558a63fa9e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T18:45:51.634852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T18:45:51.635200Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T18:45:51.635962Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T18:45:51.636741Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T18:45:51.637356Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T18:45:51.638085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.90:2379"}
	{"level":"info","ts":"2024-09-16T18:45:51.638159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T18:45:51.638220Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:47:34 up 9 min,  0 users,  load average: 0.46, 0.40, 0.19
	Linux multinode-588591 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99] <==
	I0916 18:46:55.384226       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:46:55.384259       1 main.go:299] handling current node
	I0916 18:46:55.384297       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:46:55.384303       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:46:55.384429       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:46:55.384457       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:47:05.383383       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:47:05.383603       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:47:05.383969       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:47:05.384053       1 main.go:299] handling current node
	I0916 18:47:05.384083       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:47:05.384158       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:47:15.383319       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:47:15.383352       1 main.go:299] handling current node
	I0916 18:47:15.383365       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:47:15.383370       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:47:15.383609       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:47:15.383617       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.2.0/24] 
	I0916 18:47:15.383667       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.195 Flags: [] Table: 0} 
	I0916 18:47:25.383879       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:47:25.383933       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.2.0/24] 
	I0916 18:47:25.384072       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:47:25.384082       1 main.go:299] handling current node
	I0916 18:47:25.384092       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:47:25.384096       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c] <==
	I0916 18:43:25.497910       1 main.go:299] handling current node
	I0916 18:43:35.499225       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:43:35.499438       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:43:35.499698       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:43:35.499733       1 main.go:299] handling current node
	I0916 18:43:35.499770       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:43:35.499789       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:43:45.500021       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:43:45.500162       1 main.go:299] handling current node
	I0916 18:43:45.500253       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:43:45.500343       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:43:45.500654       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:43:45.500689       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:43:55.491282       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:43:55.491401       1 main.go:299] handling current node
	I0916 18:43:55.491443       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:43:55.491465       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:43:55.491702       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:43:55.491736       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:44:05.492939       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:44:05.493087       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:44:05.493289       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:44:05.493334       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:44:05.493410       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:44:05.493430       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f] <==
	W0916 18:44:14.754100       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754121       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754141       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754159       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754179       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754200       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754219       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754237       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754257       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764062       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764087       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764106       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764125       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764145       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764163       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764181       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764199       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764220       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764239       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764260       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764278       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764297       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764316       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764343       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764365       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54] <==
	I0916 18:45:53.095859       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 18:45:53.095908       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 18:45:53.101162       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 18:45:53.101374       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 18:45:53.101447       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 18:45:53.101641       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 18:45:53.106441       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 18:45:53.106837       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 18:45:53.114980       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 18:45:53.115014       1 policy_source.go:224] refreshing policies
	I0916 18:45:53.118817       1 aggregator.go:171] initial CRD sync complete...
	I0916 18:45:53.118846       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 18:45:53.118852       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 18:45:53.118858       1 cache.go:39] Caches are synced for autoregister controller
	I0916 18:45:53.122418       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0916 18:45:53.128356       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 18:45:53.200257       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 18:45:54.006497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 18:45:55.642612       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 18:45:55.909915       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 18:45:55.921869       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 18:45:56.008025       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 18:45:56.016810       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 18:45:56.837367       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 18:45:56.887644       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4] <==
	I0916 18:41:47.834440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.031349       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-588591-m03\" does not exist"
	I0916 18:41:49.032706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:41:49.044248       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-588591-m03" podCIDRs=["10.244.4.0/24"]
	I0916 18:41:49.044705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.045025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.062141       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.067691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.362874       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.713089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:51.504042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:59.171072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:09.123809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:42:09.124049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:09.137901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:11.467801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:51.484223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:51.484577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:42:51.502320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:56.518244       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:42:56.532132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:42:56.554688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:56.573920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.532187ms"
	I0916 18:42:56.574643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.697µs"
	I0916 18:43:06.637397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	
	
	==> kube-controller-manager [e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6] <==
	I0916 18:46:56.553294       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:46:57.093796       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.604209ms"
	I0916 18:46:57.093986       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="103.805µs"
	I0916 18:47:04.422203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:47:10.716593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:10.738984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:10.979833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:10.979944       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:47:12.245708       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-588591-m03\" does not exist"
	I0916 18:47:12.245911       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:47:12.273385       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-588591-m03" podCIDRs=["10.244.2.0/24"]
	I0916 18:47:12.273437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	E0916 18:47:12.289248       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-588591-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-588591-m03" podCIDRs=["10.244.3.0/24"]
	E0916 18:47:12.289361       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-588591-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-588591-m03"
	E0916 18:47:12.289412       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-588591-m03': failed to patch node CIDR: Node \"multinode-588591-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 18:47:12.289434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:12.294777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:12.511336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:12.861195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:16.652447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:22.398370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:30.813019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:30.813091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:47:30.825300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:31.574963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	
	
	==> kube-proxy [744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:45:54.567608       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 18:45:54.578731       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.90"]
	E0916 18:45:54.579054       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:45:54.666269       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:45:54.666359       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:45:54.666399       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:45:54.673960       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:45:54.675847       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:45:54.676215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:45:54.688222       1 config.go:199] "Starting service config controller"
	I0916 18:45:54.689886       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:45:54.697403       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:45:54.697480       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:45:54.709792       1 config.go:328] "Starting node config controller"
	I0916 18:45:54.711847       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:45:54.796476       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:45:54.798342       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 18:45:54.812078       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:39:14.672200       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 18:39:14.682030       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.90"]
	E0916 18:39:14.683634       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:39:14.747729       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:39:14.747770       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:39:14.747793       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:39:14.750415       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:39:14.750920       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:39:14.750966       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:39:14.752233       1 config.go:199] "Starting service config controller"
	I0916 18:39:14.752326       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:39:14.752426       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:39:14.752450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:39:14.753092       1 config.go:328] "Starting node config controller"
	I0916 18:39:14.754756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:39:14.852730       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 18:39:14.852794       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:39:14.854963       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1] <==
	E0916 18:39:04.503392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:04.501478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 18:39:04.503444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.303749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 18:39:05.303779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.344431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 18:39:05.344488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.351839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 18:39:05.351888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.443594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 18:39:05.443738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.508403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 18:39:05.508455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.602832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 18:39:05.602880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.615661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 18:39:05.615713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.615781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 18:39:05.615792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.764096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 18:39:05.764145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.961645       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 18:39:05.961770       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 18:39:08.377594       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 18:44:14.748929       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733] <==
	I0916 18:45:51.446869       1 serving.go:386] Generated self-signed cert in-memory
	W0916 18:45:53.046455       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 18:45:53.046565       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 18:45:53.047790       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 18:45:53.048633       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 18:45:53.121678       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 18:45:53.122785       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:45:53.128338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 18:45:53.128600       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 18:45:53.129321       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 18:45:53.129698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 18:45:53.230112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 18:45:59 multinode-588591 kubelet[2931]: E0916 18:45:59.619176    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512359618409666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:45:59 multinode-588591 kubelet[2931]: I0916 18:45:59.815635    2931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 18:46:09 multinode-588591 kubelet[2931]: E0916 18:46:09.622785    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512369621848744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:09 multinode-588591 kubelet[2931]: E0916 18:46:09.623800    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512369621848744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:19 multinode-588591 kubelet[2931]: E0916 18:46:19.625263    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512379625035520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:19 multinode-588591 kubelet[2931]: E0916 18:46:19.625288    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512379625035520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:29 multinode-588591 kubelet[2931]: E0916 18:46:29.626856    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512389626507960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:29 multinode-588591 kubelet[2931]: E0916 18:46:29.626890    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512389626507960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:39 multinode-588591 kubelet[2931]: E0916 18:46:39.628350    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512399627856272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:39 multinode-588591 kubelet[2931]: E0916 18:46:39.628786    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512399627856272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:49 multinode-588591 kubelet[2931]: E0916 18:46:49.630386    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512409630191781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:49 multinode-588591 kubelet[2931]: E0916 18:46:49.630408    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512409630191781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:49 multinode-588591 kubelet[2931]: E0916 18:46:49.634473    2931 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:46:49 multinode-588591 kubelet[2931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:46:49 multinode-588591 kubelet[2931]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:46:49 multinode-588591 kubelet[2931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:46:49 multinode-588591 kubelet[2931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:46:59 multinode-588591 kubelet[2931]: E0916 18:46:59.631748    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512419631381720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:46:59 multinode-588591 kubelet[2931]: E0916 18:46:59.631775    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512419631381720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:47:09 multinode-588591 kubelet[2931]: E0916 18:47:09.632815    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512429632480395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:47:09 multinode-588591 kubelet[2931]: E0916 18:47:09.632856    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512429632480395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:47:19 multinode-588591 kubelet[2931]: E0916 18:47:19.634971    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512439634330742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:47:19 multinode-588591 kubelet[2931]: E0916 18:47:19.634994    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512439634330742,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:47:29 multinode-588591 kubelet[2931]: E0916 18:47:29.636957    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512449636482883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:47:29 multinode-588591 kubelet[2931]: E0916 18:47:29.636999    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512449636482883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 18:47:33.310213  412469 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19649-371203/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
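The "bufio.Scanner: token too long" error in the stderr above happens when a single line of lastStart.txt exceeds bufio.Scanner's default 64 KiB token limit (some of the JSON log lines captured here are easily that long). A minimal Go sketch of the usual workaround, assuming a plain file read rather than the actual minikube code path:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path for illustration
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		// Raise the default 64 KiB token cap so very long log lines scan cleanly.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // one (possibly very long) line
		}
		if err := sc.Err(); err != nil {
			fmt.Println(err) // would report "token too long" without the Buffer call
		}
	}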
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-588591 -n multinode-588591
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-588591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.32s)
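Note that the kube-controller-manager log above shows multinode-588591-m03 receiving a fresh PodCIDR (10.244.2.0/24) after the restart, a follow-up patch with 10.244.3.0/24 being rejected because a node may carry only one CIDR per IP family, and kindnet switching its route for that node from 10.244.4.0/24 to 10.244.2.0/24. A suggested cross-check against the same context the test uses (not part of the recorded run) would be to list what each node actually ended up with:

	kubectl --context multinode-588591 get nodes \
	  -o custom-columns=NAME:.metadata.name,PODCIDRS:.spec.podCIDRs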

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 stop
E0916 18:48:56.985178  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-588591 stop: exit status 82 (2m0.482665743s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-588591-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-588591 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-588591 status: exit status 3 (18.808605198s)

                                                
                                                
-- stdout --
	multinode-588591
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-588591-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 18:49:56.465304  413144 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host
	E0916 18:49:56.465342  413144 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.58:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-588591 status" : exit status 3
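Exit status 82 (GUEST_STOP_TIMEOUT) above indicates the kvm2 machine for multinode-588591-m02 still reported "Running" when the stop deadline expired, and the later status probe could no longer reach it over SSH (no route to host on 192.168.39.58:22). If the underlying libvirt state is in question, one way to inspect it directly on the Jenkins host would be the following, assuming the kvm2 driver's usual qemu:///system connection and a domain named after the node:

	virsh -c qemu:///system list --all
	virsh -c qemu:///system domstate multinode-588591-m02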
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-588591 -n multinode-588591
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-588591 logs -n 25: (1.530143459s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591:/home/docker/cp-test_multinode-588591-m02_multinode-588591.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591 sudo cat                                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m02_multinode-588591.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03:/home/docker/cp-test_multinode-588591-m02_multinode-588591-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591-m03 sudo cat                                   | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m02_multinode-588591-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp testdata/cp-test.txt                                                | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3158127858/001/cp-test_multinode-588591-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591:/home/docker/cp-test_multinode-588591-m03_multinode-588591.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591 sudo cat                                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m03_multinode-588591.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt                       | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m02:/home/docker/cp-test_multinode-588591-m03_multinode-588591-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n                                                                 | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | multinode-588591-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-588591 ssh -n multinode-588591-m02 sudo cat                                   | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	|         | /home/docker/cp-test_multinode-588591-m03_multinode-588591-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-588591 node stop m03                                                          | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:41 UTC |
	| node    | multinode-588591 node start                                                             | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:41 UTC | 16 Sep 24 18:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-588591                                                                | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:42 UTC |                     |
	| stop    | -p multinode-588591                                                                     | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:42 UTC |                     |
	| start   | -p multinode-588591                                                                     | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:44 UTC | 16 Sep 24 18:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-588591                                                                | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:47 UTC |                     |
	| node    | multinode-588591 node delete                                                            | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:47 UTC | 16 Sep 24 18:47 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-588591 stop                                                                   | multinode-588591 | jenkins | v1.34.0 | 16 Sep 24 18:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 18:44:13
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 18:44:13.863503  411348 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:44:13.863633  411348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:44:13.863642  411348 out.go:358] Setting ErrFile to fd 2...
	I0916 18:44:13.863647  411348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:44:13.863855  411348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:44:13.864413  411348 out.go:352] Setting JSON to false
	I0916 18:44:13.865399  411348 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":8797,"bootTime":1726503457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:44:13.865509  411348 start.go:139] virtualization: kvm guest
	I0916 18:44:13.868254  411348 out.go:177] * [multinode-588591] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:44:13.870128  411348 notify.go:220] Checking for updates...
	I0916 18:44:13.870157  411348 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:44:13.872014  411348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:44:13.873728  411348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:44:13.875487  411348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:44:13.877107  411348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:44:13.878582  411348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:44:13.880584  411348 config.go:182] Loaded profile config "multinode-588591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:44:13.880714  411348 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:44:13.881374  411348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:44:13.881448  411348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:44:13.896615  411348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0916 18:44:13.897171  411348 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:44:13.897782  411348 main.go:141] libmachine: Using API Version  1
	I0916 18:44:13.897823  411348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:44:13.898194  411348 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:44:13.898378  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:44:13.934576  411348 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 18:44:13.936011  411348 start.go:297] selected driver: kvm2
	I0916 18:44:13.936028  411348 start.go:901] validating driver "kvm2" against &{Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:44:13.936191  411348 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:44:13.936517  411348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:44:13.936589  411348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 18:44:13.952024  411348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 18:44:13.952705  411348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 18:44:13.952751  411348 cni.go:84] Creating CNI manager for ""
	I0916 18:44:13.952821  411348 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 18:44:13.952905  411348 start.go:340] cluster config:
	{Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:44:13.953097  411348 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 18:44:13.955038  411348 out.go:177] * Starting "multinode-588591" primary control-plane node in "multinode-588591" cluster
	I0916 18:44:13.956796  411348 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:44:13.956841  411348 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 18:44:13.956854  411348 cache.go:56] Caching tarball of preloaded images
	I0916 18:44:13.956961  411348 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 18:44:13.956974  411348 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 18:44:13.957118  411348 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/config.json ...
	I0916 18:44:13.957337  411348 start.go:360] acquireMachinesLock for multinode-588591: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 18:44:13.957393  411348 start.go:364] duration metric: took 34.341µs to acquireMachinesLock for "multinode-588591"
	I0916 18:44:13.957412  411348 start.go:96] Skipping create...Using existing machine configuration
	I0916 18:44:13.957420  411348 fix.go:54] fixHost starting: 
	I0916 18:44:13.957690  411348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:44:13.957761  411348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:44:13.972726  411348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0916 18:44:13.973294  411348 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:44:13.973868  411348 main.go:141] libmachine: Using API Version  1
	I0916 18:44:13.973902  411348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:44:13.974201  411348 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:44:13.974401  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:44:13.974543  411348 main.go:141] libmachine: (multinode-588591) Calling .GetState
	I0916 18:44:13.976212  411348 fix.go:112] recreateIfNeeded on multinode-588591: state=Running err=<nil>
	W0916 18:44:13.976237  411348 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 18:44:13.978516  411348 out.go:177] * Updating the running kvm2 "multinode-588591" VM ...
	I0916 18:44:13.980071  411348 machine.go:93] provisionDockerMachine start ...
	I0916 18:44:13.980094  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:44:13.980310  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:13.982608  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:13.983040  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:13.983064  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:13.983253  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:13.983429  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:13.983603  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:13.983767  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:13.983973  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:13.984241  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:13.984262  411348 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 18:44:14.090399  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-588591
	
	I0916 18:44:14.090439  411348 main.go:141] libmachine: (multinode-588591) Calling .GetMachineName
	I0916 18:44:14.090684  411348 buildroot.go:166] provisioning hostname "multinode-588591"
	I0916 18:44:14.090711  411348 main.go:141] libmachine: (multinode-588591) Calling .GetMachineName
	I0916 18:44:14.090996  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.093763  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.094211  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.094280  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.094399  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.094601  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.094767  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.094903  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.095164  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:14.095330  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:14.095342  411348 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-588591 && echo "multinode-588591" | sudo tee /etc/hostname
	I0916 18:44:14.216124  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-588591
	
	I0916 18:44:14.216156  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.219121  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.219481  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.219506  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.219764  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.219984  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.220214  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.220450  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.220683  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:14.220876  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:14.220893  411348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-588591' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-588591/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-588591' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 18:44:14.326224  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 18:44:14.326263  411348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 18:44:14.326290  411348 buildroot.go:174] setting up certificates
	I0916 18:44:14.326302  411348 provision.go:84] configureAuth start
	I0916 18:44:14.326311  411348 main.go:141] libmachine: (multinode-588591) Calling .GetMachineName
	I0916 18:44:14.326629  411348 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:44:14.329598  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.330051  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.330074  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.330217  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.332198  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.332516  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.332552  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.332673  411348 provision.go:143] copyHostCerts
	I0916 18:44:14.332712  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:44:14.332749  411348 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 18:44:14.332761  411348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 18:44:14.332841  411348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 18:44:14.332977  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:44:14.333001  411348 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 18:44:14.333009  411348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 18:44:14.333050  411348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 18:44:14.333129  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:44:14.333152  411348 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 18:44:14.333160  411348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 18:44:14.333192  411348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 18:44:14.333296  411348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.multinode-588591 san=[127.0.0.1 192.168.39.90 localhost minikube multinode-588591]
	I0916 18:44:14.455816  411348 provision.go:177] copyRemoteCerts
	I0916 18:44:14.455890  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 18:44:14.455916  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.459199  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.459589  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.459620  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.459823  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.460036  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.460223  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.460452  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:44:14.543759  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 18:44:14.543834  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 18:44:14.569381  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 18:44:14.569484  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 18:44:14.596488  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 18:44:14.596568  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 18:44:14.624074  411348 provision.go:87] duration metric: took 297.756485ms to configureAuth
	I0916 18:44:14.624110  411348 buildroot.go:189] setting minikube options for container-runtime
	I0916 18:44:14.624337  411348 config.go:182] Loaded profile config "multinode-588591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:44:14.624414  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:44:14.627254  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.627665  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:44:14.627696  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:44:14.627916  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:44:14.628095  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.628247  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:44:14.628353  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:44:14.628544  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:44:14.628759  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:44:14.628774  411348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 18:45:45.337930  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 18:45:45.337979  411348 machine.go:96] duration metric: took 1m31.357894451s to provisionDockerMachine
	I0916 18:45:45.337994  411348 start.go:293] postStartSetup for "multinode-588591" (driver="kvm2")
	I0916 18:45:45.338018  411348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 18:45:45.338044  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.338430  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 18:45:45.338464  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.341618  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.342117  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.342142  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.342295  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.342496  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.342713  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.342901  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:45:45.425086  411348 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 18:45:45.429680  411348 command_runner.go:130] > NAME=Buildroot
	I0916 18:45:45.429703  411348 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0916 18:45:45.429707  411348 command_runner.go:130] > ID=buildroot
	I0916 18:45:45.429712  411348 command_runner.go:130] > VERSION_ID=2023.02.9
	I0916 18:45:45.429717  411348 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0916 18:45:45.429791  411348 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 18:45:45.429815  411348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 18:45:45.429880  411348 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 18:45:45.429982  411348 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 18:45:45.429995  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /etc/ssl/certs/3784632.pem
	I0916 18:45:45.430089  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 18:45:45.440460  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:45:45.465659  411348 start.go:296] duration metric: took 127.648709ms for postStartSetup
	I0916 18:45:45.465705  411348 fix.go:56] duration metric: took 1m31.508285808s for fixHost
	I0916 18:45:45.465728  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.468638  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.469041  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.469067  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.469237  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.469434  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.469586  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.469742  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.469931  411348 main.go:141] libmachine: Using SSH client type: native
	I0916 18:45:45.470115  411348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.90 22 <nil> <nil>}
	I0916 18:45:45.470126  411348 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 18:45:45.574106  411348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726512345.540426972
	
	I0916 18:45:45.574134  411348 fix.go:216] guest clock: 1726512345.540426972
	I0916 18:45:45.574144  411348 fix.go:229] Guest: 2024-09-16 18:45:45.540426972 +0000 UTC Remote: 2024-09-16 18:45:45.465709078 +0000 UTC m=+91.640325179 (delta=74.717894ms)
	I0916 18:45:45.574192  411348 fix.go:200] guest clock delta is within tolerance: 74.717894ms
	I0916 18:45:45.574199  411348 start.go:83] releasing machines lock for "multinode-588591", held for 1m31.616794864s
	I0916 18:45:45.574226  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.574508  411348 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:45:45.577580  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.578029  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.578077  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.578240  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.578861  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.579027  411348 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:45:45.579103  411348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 18:45:45.579172  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.579249  411348 ssh_runner.go:195] Run: cat /version.json
	I0916 18:45:45.579273  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:45:45.581967  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.582366  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.582397  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.582483  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.582546  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.582719  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.582865  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.582974  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:45.582999  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:45:45.583015  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:45.583179  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:45:45.583353  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:45:45.583530  411348 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:45:45.583684  411348 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:45:45.677915  411348 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 18:45:45.678456  411348 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "fcd4ba3dbb1ef408e3a4b79c864df2496ddd3848"}
	I0916 18:45:45.678630  411348 ssh_runner.go:195] Run: systemctl --version
	I0916 18:45:45.685140  411348 command_runner.go:130] > systemd 252 (252)
	I0916 18:45:45.685184  411348 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0916 18:45:45.685260  411348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 18:45:45.851793  411348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 18:45:45.859540  411348 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0916 18:45:45.860041  411348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 18:45:45.860101  411348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 18:45:45.869780  411348 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 18:45:45.869817  411348 start.go:495] detecting cgroup driver to use...
	I0916 18:45:45.869881  411348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 18:45:45.886267  411348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 18:45:45.900551  411348 docker.go:217] disabling cri-docker service (if available) ...
	I0916 18:45:45.900608  411348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 18:45:45.915265  411348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 18:45:45.929411  411348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 18:45:46.077710  411348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 18:45:46.219053  411348 docker.go:233] disabling docker service ...
	I0916 18:45:46.219125  411348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 18:45:46.236265  411348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 18:45:46.250695  411348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 18:45:46.415489  411348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 18:45:46.557701  411348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 18:45:46.573526  411348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 18:45:46.592765  411348 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0916 18:45:46.593083  411348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 18:45:46.593141  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.604628  411348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 18:45:46.604702  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.616703  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.628344  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.639621  411348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 18:45:46.651632  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.662818  411348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.674145  411348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 18:45:46.686227  411348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 18:45:46.695619  411348 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 18:45:46.695700  411348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 18:45:46.705190  411348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:45:46.839353  411348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 18:45:47.048132  411348 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 18:45:47.048211  411348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 18:45:47.053611  411348 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0916 18:45:47.053646  411348 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 18:45:47.053656  411348 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0916 18:45:47.053673  411348 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 18:45:47.053679  411348 command_runner.go:130] > Access: 2024-09-16 18:45:46.897025857 +0000
	I0916 18:45:47.053685  411348 command_runner.go:130] > Modify: 2024-09-16 18:45:46.897025857 +0000
	I0916 18:45:47.053690  411348 command_runner.go:130] > Change: 2024-09-16 18:45:46.897025857 +0000
	I0916 18:45:47.053694  411348 command_runner.go:130] >  Birth: -
	I0916 18:45:47.053739  411348 start.go:563] Will wait 60s for crictl version
	I0916 18:45:47.053783  411348 ssh_runner.go:195] Run: which crictl
	I0916 18:45:47.057935  411348 command_runner.go:130] > /usr/bin/crictl
	I0916 18:45:47.058020  411348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 18:45:47.102737  411348 command_runner.go:130] > Version:  0.1.0
	I0916 18:45:47.102772  411348 command_runner.go:130] > RuntimeName:  cri-o
	I0916 18:45:47.102785  411348 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0916 18:45:47.102793  411348 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 18:45:47.103966  411348 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
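The version probe above talks to CRI-O through the endpoint written to /etc/crictl.yaml earlier; an equivalent manual check (the explicit flag is redundant given that file and is shown only for illustration) would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info   # runtime status and conditions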
	I0916 18:45:47.104052  411348 ssh_runner.go:195] Run: crio --version
	I0916 18:45:47.133658  411348 command_runner.go:130] > crio version 1.29.1
	I0916 18:45:47.133689  411348 command_runner.go:130] > Version:        1.29.1
	I0916 18:45:47.133699  411348 command_runner.go:130] > GitCommit:      unknown
	I0916 18:45:47.133705  411348 command_runner.go:130] > GitCommitDate:  unknown
	I0916 18:45:47.133711  411348 command_runner.go:130] > GitTreeState:   clean
	I0916 18:45:47.133719  411348 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0916 18:45:47.133727  411348 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 18:45:47.133732  411348 command_runner.go:130] > Compiler:       gc
	I0916 18:45:47.133739  411348 command_runner.go:130] > Platform:       linux/amd64
	I0916 18:45:47.133746  411348 command_runner.go:130] > Linkmode:       dynamic
	I0916 18:45:47.133753  411348 command_runner.go:130] > BuildTags:      
	I0916 18:45:47.133760  411348 command_runner.go:130] >   containers_image_ostree_stub
	I0916 18:45:47.133767  411348 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 18:45:47.133773  411348 command_runner.go:130] >   btrfs_noversion
	I0916 18:45:47.133781  411348 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 18:45:47.133788  411348 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 18:45:47.133794  411348 command_runner.go:130] >   seccomp
	I0916 18:45:47.133803  411348 command_runner.go:130] > LDFlags:          unknown
	I0916 18:45:47.133809  411348 command_runner.go:130] > SeccompEnabled:   true
	I0916 18:45:47.133817  411348 command_runner.go:130] > AppArmorEnabled:  false
	I0916 18:45:47.134888  411348 ssh_runner.go:195] Run: crio --version
	I0916 18:45:47.162864  411348 command_runner.go:130] > crio version 1.29.1
	I0916 18:45:47.162895  411348 command_runner.go:130] > Version:        1.29.1
	I0916 18:45:47.162904  411348 command_runner.go:130] > GitCommit:      unknown
	I0916 18:45:47.162911  411348 command_runner.go:130] > GitCommitDate:  unknown
	I0916 18:45:47.162917  411348 command_runner.go:130] > GitTreeState:   clean
	I0916 18:45:47.162935  411348 command_runner.go:130] > BuildDate:      2024-09-16T15:42:14Z
	I0916 18:45:47.162943  411348 command_runner.go:130] > GoVersion:      go1.21.6
	I0916 18:45:47.162950  411348 command_runner.go:130] > Compiler:       gc
	I0916 18:45:47.162957  411348 command_runner.go:130] > Platform:       linux/amd64
	I0916 18:45:47.162964  411348 command_runner.go:130] > Linkmode:       dynamic
	I0916 18:45:47.162975  411348 command_runner.go:130] > BuildTags:      
	I0916 18:45:47.162985  411348 command_runner.go:130] >   containers_image_ostree_stub
	I0916 18:45:47.162993  411348 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0916 18:45:47.163000  411348 command_runner.go:130] >   btrfs_noversion
	I0916 18:45:47.163010  411348 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0916 18:45:47.163018  411348 command_runner.go:130] >   libdm_no_deferred_remove
	I0916 18:45:47.163023  411348 command_runner.go:130] >   seccomp
	I0916 18:45:47.163031  411348 command_runner.go:130] > LDFlags:          unknown
	I0916 18:45:47.163039  411348 command_runner.go:130] > SeccompEnabled:   true
	I0916 18:45:47.163049  411348 command_runner.go:130] > AppArmorEnabled:  false
	I0916 18:45:47.166577  411348 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 18:45:47.168242  411348 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:45:47.171075  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:47.171499  411348 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:45:47.171521  411348 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:45:47.171780  411348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 18:45:47.176144  411348 command_runner.go:130] > 192.168.39.1	host.minikube.internal
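The grep above confirms the host.minikube.internal mapping is already present in /etc/hosts; had it been missing, the fix is a one-line append (shown here only as an illustrative sketch, not taken from this log):

	echo "192.168.39.1	host.minikube.internal" | sudo tee -a /etc/hosts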
	I0916 18:45:47.176238  411348 kubeadm.go:883] updating cluster {Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 18:45:47.176364  411348 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 18:45:47.176403  411348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:45:47.219665  411348 command_runner.go:130] > {
	I0916 18:45:47.219688  411348 command_runner.go:130] >   "images": [
	I0916 18:45:47.219692  411348 command_runner.go:130] >     {
	I0916 18:45:47.219701  411348 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 18:45:47.219706  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219713  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 18:45:47.219716  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219720  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219729  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 18:45:47.219736  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 18:45:47.219740  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219744  411348 command_runner.go:130] >       "size": "87190579",
	I0916 18:45:47.219747  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.219751  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.219758  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.219762  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.219765  411348 command_runner.go:130] >     },
	I0916 18:45:47.219768  411348 command_runner.go:130] >     {
	I0916 18:45:47.219774  411348 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 18:45:47.219778  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219783  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 18:45:47.219787  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219791  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219797  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 18:45:47.219810  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 18:45:47.219814  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219819  411348 command_runner.go:130] >       "size": "1363676",
	I0916 18:45:47.219823  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.219837  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.219843  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.219847  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.219852  411348 command_runner.go:130] >     },
	I0916 18:45:47.219855  411348 command_runner.go:130] >     {
	I0916 18:45:47.219861  411348 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 18:45:47.219867  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219872  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 18:45:47.219877  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219881  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219889  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 18:45:47.219898  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 18:45:47.219902  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219906  411348 command_runner.go:130] >       "size": "31470524",
	I0916 18:45:47.219910  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.219914  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.219920  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.219924  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.219928  411348 command_runner.go:130] >     },
	I0916 18:45:47.219932  411348 command_runner.go:130] >     {
	I0916 18:45:47.219938  411348 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 18:45:47.219944  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.219949  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 18:45:47.219954  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219958  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.219966  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 18:45:47.219980  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 18:45:47.219985  411348 command_runner.go:130] >       ],
	I0916 18:45:47.219989  411348 command_runner.go:130] >       "size": "63273227",
	I0916 18:45:47.220000  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.220007  411348 command_runner.go:130] >       "username": "nonroot",
	I0916 18:45:47.220011  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220017  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220021  411348 command_runner.go:130] >     },
	I0916 18:45:47.220026  411348 command_runner.go:130] >     {
	I0916 18:45:47.220034  411348 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 18:45:47.220040  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220045  411348 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 18:45:47.220051  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220055  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220064  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 18:45:47.220073  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 18:45:47.220078  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220082  411348 command_runner.go:130] >       "size": "149009664",
	I0916 18:45:47.220086  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220091  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220094  411348 command_runner.go:130] >       },
	I0916 18:45:47.220100  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220105  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220110  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220114  411348 command_runner.go:130] >     },
	I0916 18:45:47.220120  411348 command_runner.go:130] >     {
	I0916 18:45:47.220125  411348 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 18:45:47.220131  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220136  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 18:45:47.220142  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220145  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220154  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 18:45:47.220163  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 18:45:47.220169  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220173  411348 command_runner.go:130] >       "size": "95237600",
	I0916 18:45:47.220182  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220194  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220200  411348 command_runner.go:130] >       },
	I0916 18:45:47.220204  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220210  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220214  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220220  411348 command_runner.go:130] >     },
	I0916 18:45:47.220223  411348 command_runner.go:130] >     {
	I0916 18:45:47.220231  411348 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 18:45:47.220235  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220240  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 18:45:47.220246  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220250  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220259  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 18:45:47.220266  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 18:45:47.220271  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220275  411348 command_runner.go:130] >       "size": "89437508",
	I0916 18:45:47.220284  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220288  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220295  411348 command_runner.go:130] >       },
	I0916 18:45:47.220301  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220305  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220311  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220315  411348 command_runner.go:130] >     },
	I0916 18:45:47.220320  411348 command_runner.go:130] >     {
	I0916 18:45:47.220326  411348 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 18:45:47.220333  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220341  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 18:45:47.220344  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220351  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220365  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 18:45:47.220374  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 18:45:47.220380  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220385  411348 command_runner.go:130] >       "size": "92733849",
	I0916 18:45:47.220391  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.220395  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220399  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220403  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220406  411348 command_runner.go:130] >     },
	I0916 18:45:47.220409  411348 command_runner.go:130] >     {
	I0916 18:45:47.220415  411348 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 18:45:47.220418  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220423  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 18:45:47.220427  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220430  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220437  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 18:45:47.220444  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 18:45:47.220447  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220451  411348 command_runner.go:130] >       "size": "68420934",
	I0916 18:45:47.220454  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220457  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.220460  411348 command_runner.go:130] >       },
	I0916 18:45:47.220464  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220467  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220471  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.220474  411348 command_runner.go:130] >     },
	I0916 18:45:47.220477  411348 command_runner.go:130] >     {
	I0916 18:45:47.220483  411348 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 18:45:47.220488  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.220492  411348 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 18:45:47.220496  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220501  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.220509  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 18:45:47.220516  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 18:45:47.220522  411348 command_runner.go:130] >       ],
	I0916 18:45:47.220526  411348 command_runner.go:130] >       "size": "742080",
	I0916 18:45:47.220529  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.220534  411348 command_runner.go:130] >         "value": "65535"
	I0916 18:45:47.220539  411348 command_runner.go:130] >       },
	I0916 18:45:47.220543  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.220548  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.220552  411348 command_runner.go:130] >       "pinned": true
	I0916 18:45:47.220558  411348 command_runner.go:130] >     }
	I0916 18:45:47.220563  411348 command_runner.go:130] >   ]
	I0916 18:45:47.220567  411348 command_runner.go:130] > }
	I0916 18:45:47.221094  411348 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:45:47.221118  411348 crio.go:433] Images already preloaded, skipping extraction
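The JSON above is what the preload check walks through; to eyeball the same image list by hand, a short filter works (assuming jq is available on the node, which this log does not show):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'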
	I0916 18:45:47.221181  411348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 18:45:47.254888  411348 command_runner.go:130] > {
	I0916 18:45:47.254920  411348 command_runner.go:130] >   "images": [
	I0916 18:45:47.254927  411348 command_runner.go:130] >     {
	I0916 18:45:47.254940  411348 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 18:45:47.254948  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.254975  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 18:45:47.254979  411348 command_runner.go:130] >       ],
	I0916 18:45:47.254984  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.254992  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0916 18:45:47.255000  411348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 18:45:47.255003  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255008  411348 command_runner.go:130] >       "size": "87190579",
	I0916 18:45:47.255013  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255017  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255031  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255037  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255041  411348 command_runner.go:130] >     },
	I0916 18:45:47.255044  411348 command_runner.go:130] >     {
	I0916 18:45:47.255050  411348 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 18:45:47.255056  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255061  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 18:45:47.255066  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255070  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255079  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0916 18:45:47.255086  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 18:45:47.255092  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255095  411348 command_runner.go:130] >       "size": "1363676",
	I0916 18:45:47.255100  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255107  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255113  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255117  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255121  411348 command_runner.go:130] >     },
	I0916 18:45:47.255124  411348 command_runner.go:130] >     {
	I0916 18:45:47.255132  411348 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 18:45:47.255137  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255143  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 18:45:47.255147  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255151  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255160  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0916 18:45:47.255170  411348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0916 18:45:47.255173  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255177  411348 command_runner.go:130] >       "size": "31470524",
	I0916 18:45:47.255181  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255185  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255189  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255192  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255195  411348 command_runner.go:130] >     },
	I0916 18:45:47.255199  411348 command_runner.go:130] >     {
	I0916 18:45:47.255205  411348 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 18:45:47.255212  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255219  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 18:45:47.255222  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255226  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255233  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0916 18:45:47.255248  411348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0916 18:45:47.255254  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255259  411348 command_runner.go:130] >       "size": "63273227",
	I0916 18:45:47.255265  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255272  411348 command_runner.go:130] >       "username": "nonroot",
	I0916 18:45:47.255278  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255282  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255288  411348 command_runner.go:130] >     },
	I0916 18:45:47.255290  411348 command_runner.go:130] >     {
	I0916 18:45:47.255297  411348 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 18:45:47.255302  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255307  411348 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 18:45:47.255311  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255314  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255321  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0916 18:45:47.255330  411348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 18:45:47.255333  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255338  411348 command_runner.go:130] >       "size": "149009664",
	I0916 18:45:47.255343  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255347  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255369  411348 command_runner.go:130] >       },
	I0916 18:45:47.255373  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255377  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255382  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255387  411348 command_runner.go:130] >     },
	I0916 18:45:47.255393  411348 command_runner.go:130] >     {
	I0916 18:45:47.255401  411348 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 18:45:47.255406  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255411  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 18:45:47.255415  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255420  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255431  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0916 18:45:47.255438  411348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 18:45:47.255444  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255448  411348 command_runner.go:130] >       "size": "95237600",
	I0916 18:45:47.255452  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255456  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255459  411348 command_runner.go:130] >       },
	I0916 18:45:47.255463  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255467  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255473  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255476  411348 command_runner.go:130] >     },
	I0916 18:45:47.255480  411348 command_runner.go:130] >     {
	I0916 18:45:47.255487  411348 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 18:45:47.255491  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255499  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 18:45:47.255507  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255513  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255527  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0916 18:45:47.255542  411348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0916 18:45:47.255557  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255565  411348 command_runner.go:130] >       "size": "89437508",
	I0916 18:45:47.255569  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255575  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255578  411348 command_runner.go:130] >       },
	I0916 18:45:47.255582  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255586  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255590  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255594  411348 command_runner.go:130] >     },
	I0916 18:45:47.255597  411348 command_runner.go:130] >     {
	I0916 18:45:47.255605  411348 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 18:45:47.255611  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255616  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 18:45:47.255622  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255625  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255639  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0916 18:45:47.255648  411348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0916 18:45:47.255652  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255656  411348 command_runner.go:130] >       "size": "92733849",
	I0916 18:45:47.255661  411348 command_runner.go:130] >       "uid": null,
	I0916 18:45:47.255665  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255671  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255676  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255680  411348 command_runner.go:130] >     },
	I0916 18:45:47.255684  411348 command_runner.go:130] >     {
	I0916 18:45:47.255690  411348 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 18:45:47.255695  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255700  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 18:45:47.255704  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255708  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255719  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0916 18:45:47.255732  411348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0916 18:45:47.255743  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255752  411348 command_runner.go:130] >       "size": "68420934",
	I0916 18:45:47.255762  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255768  411348 command_runner.go:130] >         "value": "0"
	I0916 18:45:47.255776  411348 command_runner.go:130] >       },
	I0916 18:45:47.255783  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255792  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255799  411348 command_runner.go:130] >       "pinned": false
	I0916 18:45:47.255807  411348 command_runner.go:130] >     },
	I0916 18:45:47.255813  411348 command_runner.go:130] >     {
	I0916 18:45:47.255820  411348 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 18:45:47.255824  411348 command_runner.go:130] >       "repoTags": [
	I0916 18:45:47.255829  411348 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 18:45:47.255832  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255836  411348 command_runner.go:130] >       "repoDigests": [
	I0916 18:45:47.255843  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0916 18:45:47.255855  411348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 18:45:47.255861  411348 command_runner.go:130] >       ],
	I0916 18:45:47.255865  411348 command_runner.go:130] >       "size": "742080",
	I0916 18:45:47.255869  411348 command_runner.go:130] >       "uid": {
	I0916 18:45:47.255873  411348 command_runner.go:130] >         "value": "65535"
	I0916 18:45:47.255876  411348 command_runner.go:130] >       },
	I0916 18:45:47.255881  411348 command_runner.go:130] >       "username": "",
	I0916 18:45:47.255884  411348 command_runner.go:130] >       "spec": null,
	I0916 18:45:47.255888  411348 command_runner.go:130] >       "pinned": true
	I0916 18:45:47.255892  411348 command_runner.go:130] >     }
	I0916 18:45:47.255895  411348 command_runner.go:130] >   ]
	I0916 18:45:47.255899  411348 command_runner.go:130] > }
	I0916 18:45:47.256640  411348 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 18:45:47.256661  411348 cache_images.go:84] Images are preloaded, skipping loading
	I0916 18:45:47.256670  411348 kubeadm.go:934] updating node { 192.168.39.90 8443 v1.31.1 crio true true} ...
	I0916 18:45:47.256811  411348 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-588591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
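The kubelet flags logged above are written out as a systemd drop-in override; a sketch of producing an equivalent override by hand follows (the drop-in path is an assumption for illustration and is not taken from this log):

	# hypothetical path; the unit content mirrors the [Unit]/[Service]/[Install] block logged above
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-588591 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.90

	[Install]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet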
	I0916 18:45:47.256903  411348 ssh_runner.go:195] Run: crio config
	I0916 18:45:47.300586  411348 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0916 18:45:47.300624  411348 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0916 18:45:47.300636  411348 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0916 18:45:47.300640  411348 command_runner.go:130] > #
	I0916 18:45:47.300647  411348 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0916 18:45:47.300653  411348 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0916 18:45:47.300671  411348 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0916 18:45:47.300678  411348 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0916 18:45:47.300681  411348 command_runner.go:130] > # reload'.
	I0916 18:45:47.300687  411348 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0916 18:45:47.300697  411348 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0916 18:45:47.300706  411348 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0916 18:45:47.300716  411348 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0916 18:45:47.300722  411348 command_runner.go:130] > [crio]
	I0916 18:45:47.300731  411348 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0916 18:45:47.300741  411348 command_runner.go:130] > # containers images, in this directory.
	I0916 18:45:47.300748  411348 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0916 18:45:47.300766  411348 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0916 18:45:47.300777  411348 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0916 18:45:47.300789  411348 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0916 18:45:47.300800  411348 command_runner.go:130] > # imagestore = ""
	I0916 18:45:47.300808  411348 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0916 18:45:47.300815  411348 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0916 18:45:47.300820  411348 command_runner.go:130] > storage_driver = "overlay"
	I0916 18:45:47.300828  411348 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0916 18:45:47.300833  411348 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0916 18:45:47.300838  411348 command_runner.go:130] > storage_option = [
	I0916 18:45:47.300844  411348 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0916 18:45:47.300953  411348 command_runner.go:130] > ]
	I0916 18:45:47.300973  411348 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0916 18:45:47.300980  411348 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0916 18:45:47.301168  411348 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0916 18:45:47.301189  411348 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0916 18:45:47.301200  411348 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0916 18:45:47.301211  411348 command_runner.go:130] > # always happen on a node reboot
	I0916 18:45:47.301578  411348 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0916 18:45:47.301603  411348 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0916 18:45:47.301610  411348 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0916 18:45:47.301615  411348 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0916 18:45:47.301706  411348 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0916 18:45:47.301729  411348 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0916 18:45:47.301743  411348 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0916 18:45:47.301972  411348 command_runner.go:130] > # internal_wipe = true
	I0916 18:45:47.301986  411348 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0916 18:45:47.301992  411348 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0916 18:45:47.302306  411348 command_runner.go:130] > # internal_repair = false
	I0916 18:45:47.302329  411348 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0916 18:45:47.302341  411348 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0916 18:45:47.302352  411348 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0916 18:45:47.302557  411348 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0916 18:45:47.302579  411348 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0916 18:45:47.302585  411348 command_runner.go:130] > [crio.api]
	I0916 18:45:47.302593  411348 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0916 18:45:47.302866  411348 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0916 18:45:47.302884  411348 command_runner.go:130] > # IP address on which the stream server will listen.
	I0916 18:45:47.303097  411348 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0916 18:45:47.303118  411348 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0916 18:45:47.303127  411348 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0916 18:45:47.303345  411348 command_runner.go:130] > # stream_port = "0"
	I0916 18:45:47.303357  411348 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0916 18:45:47.303551  411348 command_runner.go:130] > # stream_enable_tls = false
	I0916 18:45:47.303561  411348 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0916 18:45:47.303895  411348 command_runner.go:130] > # stream_idle_timeout = ""
	I0916 18:45:47.303908  411348 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0916 18:45:47.303922  411348 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0916 18:45:47.303931  411348 command_runner.go:130] > # minutes.
	I0916 18:45:47.304108  411348 command_runner.go:130] > # stream_tls_cert = ""
	I0916 18:45:47.304126  411348 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0916 18:45:47.304135  411348 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0916 18:45:47.304292  411348 command_runner.go:130] > # stream_tls_key = ""
	I0916 18:45:47.304308  411348 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0916 18:45:47.304314  411348 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0916 18:45:47.304331  411348 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0916 18:45:47.304657  411348 command_runner.go:130] > # stream_tls_ca = ""
	I0916 18:45:47.304672  411348 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 18:45:47.304878  411348 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0916 18:45:47.304892  411348 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0916 18:45:47.305046  411348 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0916 18:45:47.305059  411348 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0916 18:45:47.305065  411348 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0916 18:45:47.305069  411348 command_runner.go:130] > [crio.runtime]
	I0916 18:45:47.305074  411348 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0916 18:45:47.305079  411348 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0916 18:45:47.305086  411348 command_runner.go:130] > # "nofile=1024:2048"
	I0916 18:45:47.305092  411348 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0916 18:45:47.305204  411348 command_runner.go:130] > # default_ulimits = [
	I0916 18:45:47.305618  411348 command_runner.go:130] > # ]
	I0916 18:45:47.305637  411348 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0916 18:45:47.306027  411348 command_runner.go:130] > # no_pivot = false
	I0916 18:45:47.306046  411348 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0916 18:45:47.306055  411348 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0916 18:45:47.306194  411348 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0916 18:45:47.306227  411348 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0916 18:45:47.306235  411348 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0916 18:45:47.306246  411348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 18:45:47.306253  411348 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0916 18:45:47.306259  411348 command_runner.go:130] > # Cgroup setting for conmon
	I0916 18:45:47.306269  411348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0916 18:45:47.306281  411348 command_runner.go:130] > conmon_cgroup = "pod"
	I0916 18:45:47.306295  411348 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0916 18:45:47.306306  411348 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0916 18:45:47.306319  411348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0916 18:45:47.306327  411348 command_runner.go:130] > conmon_env = [
	I0916 18:45:47.306378  411348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 18:45:47.306427  411348 command_runner.go:130] > ]
	I0916 18:45:47.306445  411348 command_runner.go:130] > # Additional environment variables to set for all the
	I0916 18:45:47.306457  411348 command_runner.go:130] > # containers. These are overridden if set in the
	I0916 18:45:47.306469  411348 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0916 18:45:47.306551  411348 command_runner.go:130] > # default_env = [
	I0916 18:45:47.306701  411348 command_runner.go:130] > # ]
	I0916 18:45:47.306717  411348 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0916 18:45:47.306728  411348 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0916 18:45:47.307067  411348 command_runner.go:130] > # selinux = false
	I0916 18:45:47.307087  411348 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0916 18:45:47.307097  411348 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0916 18:45:47.307105  411348 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0916 18:45:47.307109  411348 command_runner.go:130] > # seccomp_profile = ""
	I0916 18:45:47.307114  411348 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0916 18:45:47.307120  411348 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0916 18:45:47.307132  411348 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0916 18:45:47.307137  411348 command_runner.go:130] > # which might increase security.
	I0916 18:45:47.307143  411348 command_runner.go:130] > # This option is currently deprecated,
	I0916 18:45:47.307149  411348 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0916 18:45:47.307157  411348 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0916 18:45:47.307164  411348 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0916 18:45:47.307172  411348 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0916 18:45:47.307178  411348 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0916 18:45:47.307185  411348 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0916 18:45:47.307192  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.307205  411348 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0916 18:45:47.307214  411348 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0916 18:45:47.307219  411348 command_runner.go:130] > # the cgroup blockio controller.
	I0916 18:45:47.307228  411348 command_runner.go:130] > # blockio_config_file = ""
	I0916 18:45:47.307239  411348 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0916 18:45:47.307248  411348 command_runner.go:130] > # blockio parameters.
	I0916 18:45:47.307254  411348 command_runner.go:130] > # blockio_reload = false
	I0916 18:45:47.307266  411348 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0916 18:45:47.307276  411348 command_runner.go:130] > # irqbalance daemon.
	I0916 18:45:47.307288  411348 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0916 18:45:47.307297  411348 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0916 18:45:47.307311  411348 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0916 18:45:47.307322  411348 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0916 18:45:47.307343  411348 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0916 18:45:47.307353  411348 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0916 18:45:47.307358  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.307368  411348 command_runner.go:130] > # rdt_config_file = ""
	I0916 18:45:47.307380  411348 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0916 18:45:47.307386  411348 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0916 18:45:47.307412  411348 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0916 18:45:47.307422  411348 command_runner.go:130] > # separate_pull_cgroup = ""
	I0916 18:45:47.307433  411348 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0916 18:45:47.307442  411348 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0916 18:45:47.307450  411348 command_runner.go:130] > # will be added.
	I0916 18:45:47.307456  411348 command_runner.go:130] > # default_capabilities = [
	I0916 18:45:47.307460  411348 command_runner.go:130] > # 	"CHOWN",
	I0916 18:45:47.307464  411348 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0916 18:45:47.307468  411348 command_runner.go:130] > # 	"FSETID",
	I0916 18:45:47.307474  411348 command_runner.go:130] > # 	"FOWNER",
	I0916 18:45:47.307477  411348 command_runner.go:130] > # 	"SETGID",
	I0916 18:45:47.307481  411348 command_runner.go:130] > # 	"SETUID",
	I0916 18:45:47.307485  411348 command_runner.go:130] > # 	"SETPCAP",
	I0916 18:45:47.307489  411348 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0916 18:45:47.307494  411348 command_runner.go:130] > # 	"KILL",
	I0916 18:45:47.307497  411348 command_runner.go:130] > # ]
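
The commented default_capabilities above are what CRI-O grants when nothing else is specified. A minimal, hedged way to confirm what a running container actually received is to read its capability mask and decode it (the pod name and the mask value are placeholders; capsh ships with libcap):

    kubectl exec <pod> -- grep CapEff /proc/1/status   # prints a hex mask, e.g. "CapEff: 00000000a80425fb"
    capsh --decode=00000000a80425fb                    # decodes the pasted mask into names such as cap_chown,cap_kill,...
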
	I0916 18:45:47.307504  411348 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0916 18:45:47.307513  411348 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0916 18:45:47.307518  411348 command_runner.go:130] > # add_inheritable_capabilities = false
	I0916 18:45:47.307524  411348 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0916 18:45:47.307533  411348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 18:45:47.307542  411348 command_runner.go:130] > default_sysctls = [
	I0916 18:45:47.307555  411348 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0916 18:45:47.307563  411348 command_runner.go:130] > ]
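
The default_sysctls entry above (net.ipv4.ip_unprivileged_port_start=0) lets unprivileged container processes bind ports below 1024. A quick sketch of a check from inside any pod (pod name is a placeholder):

    kubectl exec <pod> -- cat /proc/sys/net/ipv4/ip_unprivileged_port_start   # expect 0 with the default_sysctls above
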
	I0916 18:45:47.307571  411348 command_runner.go:130] > # List of devices on the host that a
	I0916 18:45:47.307583  411348 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0916 18:45:47.307592  411348 command_runner.go:130] > # allowed_devices = [
	I0916 18:45:47.307598  411348 command_runner.go:130] > # 	"/dev/fuse",
	I0916 18:45:47.307607  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307614  411348 command_runner.go:130] > # List of additional devices, specified as
	I0916 18:45:47.307639  411348 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0916 18:45:47.307648  411348 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0916 18:45:47.307662  411348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0916 18:45:47.307670  411348 command_runner.go:130] > # additional_devices = [
	I0916 18:45:47.307680  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307688  411348 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0916 18:45:47.307697  411348 command_runner.go:130] > # cdi_spec_dirs = [
	I0916 18:45:47.307705  411348 command_runner.go:130] > # 	"/etc/cdi",
	I0916 18:45:47.307715  411348 command_runner.go:130] > # 	"/var/run/cdi",
	I0916 18:45:47.307720  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307734  411348 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0916 18:45:47.307744  411348 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0916 18:45:47.307754  411348 command_runner.go:130] > # Defaults to false.
	I0916 18:45:47.307762  411348 command_runner.go:130] > # device_ownership_from_security_context = false
	I0916 18:45:47.307774  411348 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0916 18:45:47.307786  411348 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0916 18:45:47.307793  411348 command_runner.go:130] > # hooks_dir = [
	I0916 18:45:47.307798  411348 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0916 18:45:47.307803  411348 command_runner.go:130] > # ]
	I0916 18:45:47.307810  411348 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0916 18:45:47.307822  411348 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0916 18:45:47.307834  411348 command_runner.go:130] > # its default mounts from the following two files:
	I0916 18:45:47.307839  411348 command_runner.go:130] > #
	I0916 18:45:47.307853  411348 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0916 18:45:47.307866  411348 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0916 18:45:47.307878  411348 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0916 18:45:47.307917  411348 command_runner.go:130] > #
	I0916 18:45:47.307949  411348 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0916 18:45:47.307963  411348 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0916 18:45:47.307977  411348 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0916 18:45:47.307989  411348 command_runner.go:130] > #      only add mounts it finds in this file.
	I0916 18:45:47.307995  411348 command_runner.go:130] > #
	I0916 18:45:47.308001  411348 command_runner.go:130] > # default_mounts_file = ""
	I0916 18:45:47.308010  411348 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0916 18:45:47.308032  411348 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0916 18:45:47.308043  411348 command_runner.go:130] > pids_limit = 1024
	I0916 18:45:47.308052  411348 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0916 18:45:47.308065  411348 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0916 18:45:47.308075  411348 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0916 18:45:47.308090  411348 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0916 18:45:47.308100  411348 command_runner.go:130] > # log_size_max = -1
	I0916 18:45:47.308111  411348 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0916 18:45:47.308125  411348 command_runner.go:130] > # log_to_journald = false
	I0916 18:45:47.308138  411348 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0916 18:45:47.308146  411348 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0916 18:45:47.308154  411348 command_runner.go:130] > # Path to directory for container attach sockets.
	I0916 18:45:47.308164  411348 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0916 18:45:47.308173  411348 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0916 18:45:47.308179  411348 command_runner.go:130] > # bind_mount_prefix = ""
	I0916 18:45:47.308185  411348 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0916 18:45:47.308193  411348 command_runner.go:130] > # read_only = false
	I0916 18:45:47.308206  411348 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0916 18:45:47.308218  411348 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0916 18:45:47.308229  411348 command_runner.go:130] > # live configuration reload.
	I0916 18:45:47.308235  411348 command_runner.go:130] > # log_level = "info"
	I0916 18:45:47.308250  411348 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0916 18:45:47.308261  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.308270  411348 command_runner.go:130] > # log_filter = ""
	I0916 18:45:47.308279  411348 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0916 18:45:47.308292  411348 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0916 18:45:47.308300  411348 command_runner.go:130] > # separated by comma.
	I0916 18:45:47.308315  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308324  411348 command_runner.go:130] > # uid_mappings = ""
	I0916 18:45:47.308334  411348 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0916 18:45:47.308350  411348 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0916 18:45:47.308357  411348 command_runner.go:130] > # separated by comma.
	I0916 18:45:47.308370  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308380  411348 command_runner.go:130] > # gid_mappings = ""
	I0916 18:45:47.308389  411348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0916 18:45:47.308401  411348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 18:45:47.308416  411348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 18:45:47.308434  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308444  411348 command_runner.go:130] > # minimum_mappable_uid = -1
	I0916 18:45:47.308455  411348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0916 18:45:47.308468  411348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0916 18:45:47.308480  411348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0916 18:45:47.308492  411348 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0916 18:45:47.308503  411348 command_runner.go:130] > # minimum_mappable_gid = -1
	I0916 18:45:47.308512  411348 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0916 18:45:47.308522  411348 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0916 18:45:47.308528  411348 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0916 18:45:47.308534  411348 command_runner.go:130] > # ctr_stop_timeout = 30
	I0916 18:45:47.308540  411348 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0916 18:45:47.308548  411348 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0916 18:45:47.308555  411348 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0916 18:45:47.308562  411348 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0916 18:45:47.308566  411348 command_runner.go:130] > drop_infra_ctr = false
	I0916 18:45:47.308579  411348 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0916 18:45:47.308591  411348 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0916 18:45:47.308603  411348 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0916 18:45:47.308614  411348 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0916 18:45:47.308629  411348 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0916 18:45:47.308641  411348 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0916 18:45:47.308654  411348 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0916 18:45:47.308663  411348 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0916 18:45:47.308669  411348 command_runner.go:130] > # shared_cpuset = ""
	I0916 18:45:47.308677  411348 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0916 18:45:47.308689  411348 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0916 18:45:47.308696  411348 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0916 18:45:47.308710  411348 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0916 18:45:47.308721  411348 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0916 18:45:47.308732  411348 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0916 18:45:47.308744  411348 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0916 18:45:47.308754  411348 command_runner.go:130] > # enable_criu_support = false
	I0916 18:45:47.308762  411348 command_runner.go:130] > # Enable/disable the generation of the container,
	I0916 18:45:47.308780  411348 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0916 18:45:47.308798  411348 command_runner.go:130] > # enable_pod_events = false
	I0916 18:45:47.308811  411348 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0916 18:45:47.308834  411348 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0916 18:45:47.308842  411348 command_runner.go:130] > # default_runtime = "runc"
	I0916 18:45:47.308852  411348 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0916 18:45:47.308876  411348 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as a directory).
	I0916 18:45:47.308896  411348 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0916 18:45:47.308910  411348 command_runner.go:130] > # creation as a file is not desired either.
	I0916 18:45:47.308935  411348 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0916 18:45:47.308947  411348 command_runner.go:130] > # the hostname is being managed dynamically.
	I0916 18:45:47.308955  411348 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0916 18:45:47.308963  411348 command_runner.go:130] > # ]
	I0916 18:45:47.308974  411348 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0916 18:45:47.308984  411348 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0916 18:45:47.308994  411348 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0916 18:45:47.309005  411348 command_runner.go:130] > # Each entry in the table should follow the format:
	I0916 18:45:47.309011  411348 command_runner.go:130] > #
	I0916 18:45:47.309020  411348 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0916 18:45:47.309030  411348 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0916 18:45:47.309068  411348 command_runner.go:130] > # runtime_type = "oci"
	I0916 18:45:47.309078  411348 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0916 18:45:47.309086  411348 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0916 18:45:47.309096  411348 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0916 18:45:47.309103  411348 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0916 18:45:47.309113  411348 command_runner.go:130] > # monitor_env = []
	I0916 18:45:47.309123  411348 command_runner.go:130] > # privileged_without_host_devices = false
	I0916 18:45:47.309132  411348 command_runner.go:130] > # allowed_annotations = []
	I0916 18:45:47.309141  411348 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0916 18:45:47.309150  411348 command_runner.go:130] > # Where:
	I0916 18:45:47.309158  411348 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0916 18:45:47.309168  411348 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0916 18:45:47.309177  411348 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0916 18:45:47.309198  411348 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0916 18:45:47.309208  411348 command_runner.go:130] > #   in $PATH.
	I0916 18:45:47.309218  411348 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0916 18:45:47.309228  411348 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0916 18:45:47.309241  411348 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0916 18:45:47.309249  411348 command_runner.go:130] > #   state.
	I0916 18:45:47.309255  411348 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0916 18:45:47.309266  411348 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0916 18:45:47.309279  411348 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0916 18:45:47.309290  411348 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0916 18:45:47.309302  411348 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0916 18:45:47.309316  411348 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0916 18:45:47.309326  411348 command_runner.go:130] > #   The currently recognized values are:
	I0916 18:45:47.309336  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0916 18:45:47.309378  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0916 18:45:47.309390  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0916 18:45:47.309403  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0916 18:45:47.309417  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0916 18:45:47.309429  411348 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0916 18:45:47.309440  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0916 18:45:47.309454  411348 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0916 18:45:47.309467  411348 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0916 18:45:47.309478  411348 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0916 18:45:47.309488  411348 command_runner.go:130] > #   deprecated option "conmon".
	I0916 18:45:47.309498  411348 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0916 18:45:47.309509  411348 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0916 18:45:47.309522  411348 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0916 18:45:47.309533  411348 command_runner.go:130] > #   should be moved to the container's cgroup
	I0916 18:45:47.309546  411348 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0916 18:45:47.309556  411348 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0916 18:45:47.309566  411348 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0916 18:45:47.309577  411348 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0916 18:45:47.309583  411348 command_runner.go:130] > #
	I0916 18:45:47.309600  411348 command_runner.go:130] > # Using the seccomp notifier feature:
	I0916 18:45:47.309609  411348 command_runner.go:130] > #
	I0916 18:45:47.309619  411348 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0916 18:45:47.309630  411348 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0916 18:45:47.309646  411348 command_runner.go:130] > #
	I0916 18:45:47.309656  411348 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0916 18:45:47.309675  411348 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0916 18:45:47.309683  411348 command_runner.go:130] > #
	I0916 18:45:47.309693  411348 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0916 18:45:47.309702  411348 command_runner.go:130] > # feature.
	I0916 18:45:47.309707  411348 command_runner.go:130] > #
	I0916 18:45:47.309720  411348 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0916 18:45:47.309732  411348 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0916 18:45:47.309743  411348 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0916 18:45:47.309753  411348 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0916 18:45:47.309765  411348 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0916 18:45:47.309773  411348 command_runner.go:130] > #
	I0916 18:45:47.309783  411348 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0916 18:45:47.309795  411348 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0916 18:45:47.309803  411348 command_runner.go:130] > #
	I0916 18:45:47.309811  411348 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0916 18:45:47.309820  411348 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0916 18:45:47.309827  411348 command_runner.go:130] > #
	I0916 18:45:47.309837  411348 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0916 18:45:47.309849  411348 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0916 18:45:47.309859  411348 command_runner.go:130] > # limitation.
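
A sketch of what opting into the notifier looks like on the workload side, assuming a runtime handler that lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations (the runc stanza that follows does not); the pod name and image are placeholders:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notifier-demo      # placeholder name
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: stop
    spec:
      restartPolicy: Never             # required; otherwise the kubelet restarts the stopped container
      containers:
      - name: demo
        image: busybox                 # placeholder image
        command: ["sleep", "3600"]
        securityContext:
          seccompProfile:
            type: RuntimeDefault       # CRI-O modifies the chosen profile to enable the notifier
    EOF
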
	I0916 18:45:47.309866  411348 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0916 18:45:47.309877  411348 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0916 18:45:47.309884  411348 command_runner.go:130] > runtime_type = "oci"
	I0916 18:45:47.309892  411348 command_runner.go:130] > runtime_root = "/run/runc"
	I0916 18:45:47.309900  411348 command_runner.go:130] > runtime_config_path = ""
	I0916 18:45:47.309908  411348 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0916 18:45:47.309918  411348 command_runner.go:130] > monitor_cgroup = "pod"
	I0916 18:45:47.309932  411348 command_runner.go:130] > monitor_exec_cgroup = ""
	I0916 18:45:47.309941  411348 command_runner.go:130] > monitor_env = [
	I0916 18:45:47.309951  411348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0916 18:45:47.309960  411348 command_runner.go:130] > ]
	I0916 18:45:47.309967  411348 command_runner.go:130] > privileged_without_host_devices = false
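
The paths in the runc stanza above can be verified directly on the guest; a minimal sketch using the profile name from this run (minikube ssh passes the quoted command through to the node):

    minikube ssh -p multinode-588591 -- "ls -l /usr/bin/runc /usr/libexec/crio/conmon && /usr/bin/runc --version"
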
	I0916 18:45:47.309980  411348 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0916 18:45:47.309991  411348 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0916 18:45:47.310004  411348 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0916 18:45:47.310015  411348 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0916 18:45:47.310029  411348 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0916 18:45:47.310041  411348 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0916 18:45:47.310061  411348 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0916 18:45:47.310077  411348 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0916 18:45:47.310090  411348 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0916 18:45:47.310106  411348 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0916 18:45:47.310114  411348 command_runner.go:130] > # Example:
	I0916 18:45:47.310123  411348 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0916 18:45:47.310134  411348 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0916 18:45:47.310141  411348 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0916 18:45:47.310153  411348 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0916 18:45:47.310162  411348 command_runner.go:130] > # cpuset = 0
	I0916 18:45:47.310170  411348 command_runner.go:130] > # cpushares = "0-1"
	I0916 18:45:47.310179  411348 command_runner.go:130] > # Where:
	I0916 18:45:47.310189  411348 command_runner.go:130] > # The workload name is workload-type.
	I0916 18:45:47.310202  411348 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0916 18:45:47.310213  411348 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0916 18:45:47.310225  411348 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0916 18:45:47.310240  411348 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0916 18:45:47.310253  411348 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0916 18:45:47.310267  411348 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0916 18:45:47.310280  411348 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0916 18:45:47.310290  411348 command_runner.go:130] > # Default value is set to true
	I0916 18:45:47.310299  411348 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0916 18:45:47.310308  411348 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0916 18:45:47.310318  411348 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0916 18:45:47.310329  411348 command_runner.go:130] > # Default value is set to 'false'
	I0916 18:45:47.310336  411348 command_runner.go:130] > # disable_hostport_mapping = false
	I0916 18:45:47.310355  411348 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0916 18:45:47.310363  411348 command_runner.go:130] > #
	I0916 18:45:47.310372  411348 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0916 18:45:47.310384  411348 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0916 18:45:47.310396  411348 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0916 18:45:47.310402  411348 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0916 18:45:47.310410  411348 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0916 18:45:47.310415  411348 command_runner.go:130] > [crio.image]
	I0916 18:45:47.310424  411348 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0916 18:45:47.310432  411348 command_runner.go:130] > # default_transport = "docker://"
	I0916 18:45:47.310445  411348 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0916 18:45:47.310455  411348 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0916 18:45:47.310461  411348 command_runner.go:130] > # global_auth_file = ""
	I0916 18:45:47.310469  411348 command_runner.go:130] > # The image used to instantiate infra containers.
	I0916 18:45:47.310477  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.310484  411348 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0916 18:45:47.310490  411348 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0916 18:45:47.310499  411348 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0916 18:45:47.310507  411348 command_runner.go:130] > # This option supports live configuration reload.
	I0916 18:45:47.310515  411348 command_runner.go:130] > # pause_image_auth_file = ""
	I0916 18:45:47.310525  411348 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0916 18:45:47.310534  411348 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0916 18:45:47.310544  411348 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0916 18:45:47.310553  411348 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0916 18:45:47.310559  411348 command_runner.go:130] > # pause_command = "/pause"
	I0916 18:45:47.310568  411348 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0916 18:45:47.310574  411348 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0916 18:45:47.310581  411348 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0916 18:45:47.310594  411348 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0916 18:45:47.310610  411348 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0916 18:45:47.310623  411348 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0916 18:45:47.310639  411348 command_runner.go:130] > # pinned_images = [
	I0916 18:45:47.310647  411348 command_runner.go:130] > # ]
	I0916 18:45:47.310660  411348 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0916 18:45:47.310672  411348 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0916 18:45:47.310680  411348 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0916 18:45:47.310690  411348 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0916 18:45:47.310703  411348 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0916 18:45:47.310712  411348 command_runner.go:130] > # signature_policy = ""
	I0916 18:45:47.310725  411348 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0916 18:45:47.310738  411348 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0916 18:45:47.310752  411348 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0916 18:45:47.310765  411348 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0916 18:45:47.310774  411348 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0916 18:45:47.310784  411348 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0916 18:45:47.310800  411348 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0916 18:45:47.310814  411348 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0916 18:45:47.310826  411348 command_runner.go:130] > # changing them here.
	I0916 18:45:47.310835  411348 command_runner.go:130] > # insecure_registries = [
	I0916 18:45:47.310843  411348 command_runner.go:130] > # ]
	I0916 18:45:47.310854  411348 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0916 18:45:47.310862  411348 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0916 18:45:47.310867  411348 command_runner.go:130] > # image_volumes = "mkdir"
	I0916 18:45:47.310877  411348 command_runner.go:130] > # Temporary directory to use for storing big files
	I0916 18:45:47.310887  411348 command_runner.go:130] > # big_files_temporary_dir = ""
	I0916 18:45:47.310897  411348 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0916 18:45:47.310906  411348 command_runner.go:130] > # CNI plugins.
	I0916 18:45:47.310916  411348 command_runner.go:130] > [crio.network]
	I0916 18:45:47.310928  411348 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0916 18:45:47.310939  411348 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0916 18:45:47.310949  411348 command_runner.go:130] > # cni_default_network = ""
	I0916 18:45:47.310959  411348 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0916 18:45:47.310969  411348 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0916 18:45:47.310982  411348 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0916 18:45:47.310994  411348 command_runner.go:130] > # plugin_dirs = [
	I0916 18:45:47.311003  411348 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0916 18:45:47.311011  411348 command_runner.go:130] > # ]
	I0916 18:45:47.311024  411348 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0916 18:45:47.311032  411348 command_runner.go:130] > [crio.metrics]
	I0916 18:45:47.311044  411348 command_runner.go:130] > # Globally enable or disable metrics support.
	I0916 18:45:47.311052  411348 command_runner.go:130] > enable_metrics = true
	I0916 18:45:47.311063  411348 command_runner.go:130] > # Specify enabled metrics collectors.
	I0916 18:45:47.311074  411348 command_runner.go:130] > # By default, all metrics are enabled.
	I0916 18:45:47.311084  411348 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0916 18:45:47.311097  411348 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0916 18:45:47.311109  411348 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0916 18:45:47.311119  411348 command_runner.go:130] > # metrics_collectors = [
	I0916 18:45:47.311129  411348 command_runner.go:130] > # 	"operations",
	I0916 18:45:47.311139  411348 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0916 18:45:47.311147  411348 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0916 18:45:47.311152  411348 command_runner.go:130] > # 	"operations_errors",
	I0916 18:45:47.311162  411348 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0916 18:45:47.311172  411348 command_runner.go:130] > # 	"image_pulls_by_name",
	I0916 18:45:47.311179  411348 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0916 18:45:47.311190  411348 command_runner.go:130] > # 	"image_pulls_failures",
	I0916 18:45:47.311200  411348 command_runner.go:130] > # 	"image_pulls_successes",
	I0916 18:45:47.311209  411348 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0916 18:45:47.311218  411348 command_runner.go:130] > # 	"image_layer_reuse",
	I0916 18:45:47.311229  411348 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0916 18:45:47.311239  411348 command_runner.go:130] > # 	"containers_oom_total",
	I0916 18:45:47.311246  411348 command_runner.go:130] > # 	"containers_oom",
	I0916 18:45:47.311252  411348 command_runner.go:130] > # 	"processes_defunct",
	I0916 18:45:47.311261  411348 command_runner.go:130] > # 	"operations_total",
	I0916 18:45:47.311272  411348 command_runner.go:130] > # 	"operations_latency_seconds",
	I0916 18:45:47.311283  411348 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0916 18:45:47.311294  411348 command_runner.go:130] > # 	"operations_errors_total",
	I0916 18:45:47.311304  411348 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0916 18:45:47.311314  411348 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0916 18:45:47.311324  411348 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0916 18:45:47.311334  411348 command_runner.go:130] > # 	"image_pulls_success_total",
	I0916 18:45:47.311345  411348 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0916 18:45:47.311354  411348 command_runner.go:130] > # 	"containers_oom_count_total",
	I0916 18:45:47.311365  411348 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0916 18:45:47.311376  411348 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0916 18:45:47.311384  411348 command_runner.go:130] > # ]
	I0916 18:45:47.311395  411348 command_runner.go:130] > # The port on which the metrics server will listen.
	I0916 18:45:47.311404  411348 command_runner.go:130] > # metrics_port = 9090
	I0916 18:45:47.311416  411348 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0916 18:45:47.311424  411348 command_runner.go:130] > # metrics_socket = ""
	I0916 18:45:47.311435  411348 command_runner.go:130] > # The certificate for the secure metrics server.
	I0916 18:45:47.311441  411348 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0916 18:45:47.311453  411348 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0916 18:45:47.311465  411348 command_runner.go:130] > # certificate on any modification event.
	I0916 18:45:47.311474  411348 command_runner.go:130] > # metrics_cert = ""
	I0916 18:45:47.311485  411348 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0916 18:45:47.311496  411348 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0916 18:45:47.311505  411348 command_runner.go:130] > # metrics_key = ""
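
With enable_metrics = true and the default metrics_port shown above, the exporter can be scraped from the guest. A hedged sketch, assuming curl is available in the guest image:

    minikube ssh -p multinode-588591 -- "curl -s http://127.0.0.1:9090/metrics | grep -m 5 crio_"
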
	I0916 18:45:47.311517  411348 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0916 18:45:47.311527  411348 command_runner.go:130] > [crio.tracing]
	I0916 18:45:47.311536  411348 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0916 18:45:47.311544  411348 command_runner.go:130] > # enable_tracing = false
	I0916 18:45:47.311556  411348 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0916 18:45:47.311566  411348 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0916 18:45:47.311580  411348 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0916 18:45:47.311591  411348 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0916 18:45:47.311601  411348 command_runner.go:130] > # CRI-O NRI configuration.
	I0916 18:45:47.311610  411348 command_runner.go:130] > [crio.nri]
	I0916 18:45:47.311619  411348 command_runner.go:130] > # Globally enable or disable NRI.
	I0916 18:45:47.311629  411348 command_runner.go:130] > # enable_nri = false
	I0916 18:45:47.311638  411348 command_runner.go:130] > # NRI socket to listen on.
	I0916 18:45:47.311649  411348 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0916 18:45:47.311660  411348 command_runner.go:130] > # NRI plugin directory to use.
	I0916 18:45:47.311672  411348 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0916 18:45:47.311686  411348 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0916 18:45:47.311697  411348 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0916 18:45:47.311705  411348 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0916 18:45:47.311709  411348 command_runner.go:130] > # nri_disable_connections = false
	I0916 18:45:47.311719  411348 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0916 18:45:47.311729  411348 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0916 18:45:47.311738  411348 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0916 18:45:47.311748  411348 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0916 18:45:47.311759  411348 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0916 18:45:47.311767  411348 command_runner.go:130] > [crio.stats]
	I0916 18:45:47.311777  411348 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0916 18:45:47.311788  411348 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0916 18:45:47.311798  411348 command_runner.go:130] > # stats_collection_period = 0
	I0916 18:45:47.311825  411348 command_runner.go:130] ! time="2024-09-16 18:45:47.257996033Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0916 18:45:47.311846  411348 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0916 18:45:47.311938  411348 cni.go:84] Creating CNI manager for ""
	I0916 18:45:47.311951  411348 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 18:45:47.311961  411348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 18:45:47.311981  411348 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.90 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-588591 NodeName:multinode-588591 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.90"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.90 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 18:45:47.312132  411348 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.90
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-588591"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.90
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.90"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
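
The generated config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A hedged way to sanity-check such a file without mutating a node is kubeadm's dry-run mode (best run on a clean host, since preflight checks still execute):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
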
	
	I0916 18:45:47.312200  411348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 18:45:47.323063  411348 command_runner.go:130] > kubeadm
	I0916 18:45:47.323093  411348 command_runner.go:130] > kubectl
	I0916 18:45:47.323098  411348 command_runner.go:130] > kubelet
	I0916 18:45:47.323193  411348 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 18:45:47.323258  411348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 18:45:47.336229  411348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 18:45:47.353738  411348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 18:45:47.372366  411348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0916 18:45:47.390868  411348 ssh_runner.go:195] Run: grep 192.168.39.90	control-plane.minikube.internal$ /etc/hosts
	I0916 18:45:47.395261  411348 command_runner.go:130] > 192.168.39.90	control-plane.minikube.internal
	I0916 18:45:47.395345  411348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 18:45:47.534583  411348 ssh_runner.go:195] Run: sudo systemctl start kubelet
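
After the daemon-reload/start pair above, the unit state can be confirmed from the guest; a minimal sketch using the profile name from this run:

    minikube ssh -p multinode-588591 -- "sudo systemctl is-active kubelet && sudo journalctl -u kubelet --no-pager -n 20"
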
	I0916 18:45:47.550550  411348 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591 for IP: 192.168.39.90
	I0916 18:45:47.550586  411348 certs.go:194] generating shared ca certs ...
	I0916 18:45:47.550609  411348 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 18:45:47.550781  411348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 18:45:47.550838  411348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 18:45:47.550852  411348 certs.go:256] generating profile certs ...
	I0916 18:45:47.550982  411348 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/client.key
	I0916 18:45:47.551076  411348 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.key.a0b9fd92
	I0916 18:45:47.551138  411348 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.key
	I0916 18:45:47.551154  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 18:45:47.551180  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 18:45:47.551198  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 18:45:47.551223  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 18:45:47.551242  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 18:45:47.551261  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 18:45:47.551280  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 18:45:47.551298  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 18:45:47.551432  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 18:45:47.551508  411348 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 18:45:47.551524  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 18:45:47.551559  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 18:45:47.551596  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 18:45:47.551629  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 18:45:47.551695  411348 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 18:45:47.551741  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.551765  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem -> /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.551786  411348 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.552480  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 18:45:47.579482  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 18:45:47.606015  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 18:45:47.632548  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 18:45:47.659384  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 18:45:47.684505  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 18:45:47.709886  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 18:45:47.737319  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/multinode-588591/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 18:45:47.762339  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 18:45:47.787960  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 18:45:47.814814  411348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 18:45:47.841593  411348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 18:45:47.858886  411348 ssh_runner.go:195] Run: openssl version
	I0916 18:45:47.865136  411348 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0916 18:45:47.865226  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 18:45:47.876423  411348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.881142  411348 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.881245  411348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.881309  411348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 18:45:47.887178  411348 command_runner.go:130] > b5213941
	I0916 18:45:47.887250  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 18:45:47.897299  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 18:45:47.908838  411348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.913414  411348 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.913447  411348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.913487  411348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 18:45:47.919376  411348 command_runner.go:130] > 51391683
	I0916 18:45:47.919480  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 18:45:47.929331  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 18:45:47.940126  411348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.945253  411348 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.945436  411348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.945491  411348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 18:45:47.951866  411348 command_runner.go:130] > 3ec20f2e
	I0916 18:45:47.952090  411348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
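	The "openssl x509 -hash" / "ln -fs" pairs above install each CA into OpenSSL's trust directory, where certificates are resolved through <subject-hash>.0 symlinks under /etc/ssl/certs. A minimal sketch of that step, assuming a local root shell instead of minikube's ssh_runner (hypothetical code, not minikube source; the path and hash are the ones visible in the log):

	    // Hypothetical sketch, not minikube source: reproduce the hash+symlink
	    // step from the log for a single CA file, using the local openssl binary.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // subjectHash returns the OpenSSL subject hash of a PEM certificate,
	    // e.g. "b5213941" for the minikubeCA.pem shown above.
	    func subjectHash(pemPath string) (string, error) {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	        if err != nil {
	            return "", err
	        }
	        return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
	        hash, err := subjectHash(pem)
	        if err != nil {
	            fmt.Println("hash failed:", err)
	            return
	        }
	        // OpenSSL looks up trust anchors via <subject-hash>.0 links in /etc/ssl/certs,
	        // which is what the "ln -fs ... /etc/ssl/certs/b5213941.0" command creates.
	        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	    }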
	I0916 18:45:47.962372  411348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:45:47.967453  411348 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 18:45:47.967487  411348 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 18:45:47.967496  411348 command_runner.go:130] > Device: 253,1	Inode: 7337000     Links: 1
	I0916 18:45:47.967505  411348 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 18:45:47.967528  411348 command_runner.go:130] > Access: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967535  411348 command_runner.go:130] > Modify: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967543  411348 command_runner.go:130] > Change: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967550  411348 command_runner.go:130] >  Birth: 2024-09-16 18:38:57.904203808 +0000
	I0916 18:45:47.967616  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 18:45:47.973790  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.973876  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 18:45:47.979824  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.979897  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 18:45:47.985980  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.986208  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 18:45:47.992132  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.992198  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 18:45:47.997720  411348 command_runner.go:130] > Certificate will not expire
	I0916 18:45:47.997789  411348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 18:45:48.003192  411348 command_runner.go:130] > Certificate will not expire
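	The repeated "Certificate will not expire" lines come from openssl's -checkend probe: exit status 0 means the certificate is still valid for at least the next 86400 seconds (24 hours). A small sketch of the same check, assuming local certificate files rather than the VM paths above (hypothetical code, not minikube source):

	    // Hypothetical sketch: mirror the "-checkend 86400" probe from the log.
	    // openssl exits non-zero when the certificate expires within the window.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func validForNextDay(certPath string) bool {
	        err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	        return err == nil // exit 0 corresponds to "Certificate will not expire"
	    }

	    func main() {
	        for _, c := range []string{
	            "/var/lib/minikube/certs/apiserver-kubelet-client.crt", // paths from the log above
	            "/var/lib/minikube/certs/etcd/server.crt",
	        } {
	            fmt.Printf("%s valid for 24h: %v\n", c, validForNextDay(c))
	        }
	    }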
	I0916 18:45:48.003362  411348 kubeadm.go:392] StartCluster: {Name:multinode-588591 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-588591 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.90 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.195 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:45:48.003490  411348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 18:45:48.003549  411348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 18:45:48.041315  411348 command_runner.go:130] > 5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9
	I0916 18:45:48.041347  411348 command_runner.go:130] > 536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da
	I0916 18:45:48.041353  411348 command_runner.go:130] > b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c
	I0916 18:45:48.041360  411348 command_runner.go:130] > 88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1
	I0916 18:45:48.041366  411348 command_runner.go:130] > 0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4
	I0916 18:45:48.041385  411348 command_runner.go:130] > dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1
	I0916 18:45:48.041390  411348 command_runner.go:130] > 8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9
	I0916 18:45:48.041409  411348 command_runner.go:130] > 6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f
	I0916 18:45:48.043263  411348 cri.go:89] found id: "5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9"
	I0916 18:45:48.043284  411348 cri.go:89] found id: "536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da"
	I0916 18:45:48.043289  411348 cri.go:89] found id: "b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c"
	I0916 18:45:48.043295  411348 cri.go:89] found id: "88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1"
	I0916 18:45:48.043298  411348 cri.go:89] found id: "0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4"
	I0916 18:45:48.043301  411348 cri.go:89] found id: "dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1"
	I0916 18:45:48.043304  411348 cri.go:89] found id: "8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9"
	I0916 18:45:48.043307  411348 cri.go:89] found id: "6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f"
	I0916 18:45:48.043309  411348 cri.go:89] found id: ""
	I0916 18:45:48.043379  411348 ssh_runner.go:195] Run: sudo runc list -f json
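	The crictl command above prints one container ID per line for the kube-system namespace, and the "found id:" entries are that output split on newlines. A minimal parsing sketch under that assumption (hypothetical, not the real cri.go):

	    // Hypothetical sketch: split crictl's --quiet output into container IDs,
	    // matching the "found id:" lines in the log above.
	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    func parseContainerIDs(out string) []string {
	        var ids []string
	        for _, line := range strings.Split(out, "\n") {
	            if id := strings.TrimSpace(line); id != "" {
	                ids = append(ids, id)
	            }
	        }
	        return ids
	    }

	    func main() {
	        sample := "5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9\n" +
	            "536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da\n"
	        for _, id := range parseContainerIDs(sample) {
	            fmt.Println("found id:", id)
	        }
	    }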
	
	
	==> CRI-O <==
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.108148084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512597108126068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d575d6a-9fbf-47a8-8637-b069f7eafe12 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.108851486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbe13220-e22a-4673-94bc-9a7a22e6f38e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.108928435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbe13220-e22a-4673-94bc-9a7a22e6f38e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.109251040Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbe13220-e22a-4673-94bc-9a7a22e6f38e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.153724195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50073ae9-0149-4a89-a6c8-2662fadeecf9 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.153815304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50073ae9-0149-4a89-a6c8-2662fadeecf9 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.155486646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dff2576e-250b-469a-bafe-5a33f1e672c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.156377882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512597156189923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dff2576e-250b-469a-bafe-5a33f1e672c1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.157047954Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0d9055c-023f-447a-a16a-9c36eb227fe4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.157122526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0d9055c-023f-447a-a16a-9c36eb227fe4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.157478952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0d9055c-023f-447a-a16a-9c36eb227fe4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.203926109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7be62c1f-038b-4ca0-b526-38a8dfde34d1 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.204054492Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7be62c1f-038b-4ca0-b526-38a8dfde34d1 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.205434259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=96ff0757-69e6-46f0-9a34-ed9dd7c5811b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.206031765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512597206008246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96ff0757-69e6-46f0-9a34-ed9dd7c5811b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.206747510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6511825-18df-4ea3-abfd-41a9748ce583 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.206884205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6511825-18df-4ea3-abfd-41a9748ce583 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.208954596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6511825-18df-4ea3-abfd-41a9748ce583 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.255102895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=005e4d53-80f7-49a3-869e-c77a05f270b7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.255179941Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=005e4d53-80f7-49a3-869e-c77a05f270b7 name=/runtime.v1.RuntimeService/Version
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.256212452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9972d974-8266-4a93-a1be-fd3f87743bb6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.256842350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512597256815802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9972d974-8266-4a93-a1be-fd3f87743bb6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.257596119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=330c1609-5294-4757-ae69-b4762875ee1e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.257652846Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=330c1609-5294-4757-ae69-b4762875ee1e name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 18:49:57 multinode-588591 crio[2723]: time="2024-09-16 18:49:57.257997441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ca66d6fc74c25dd4ca89db4ee0ebcac4065cba4cd5734c9992d4881770f23a9a,PodSandboxId:9779b6ef0776399a59f8ca48b59e1c40bb983868565fe1b9c010a534f7ad07cd,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726512387862717544,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99,PodSandboxId:a2e636e98d9ba0aa69c3b1ac1b8ff968998e3fdc532db37940347d487da3ab28,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726512354343352715,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e,PodSandboxId:01f17e6951bd252b01f7be9e5e8d3e7061a1a4aa50ca44cc1a148f13573feb20,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726512354387073125,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f35e43ce6aa43df357e4c632f5afe4b96fcf5aa62aaacb93fed0ff7be4ae4,PodSandboxId:e77116881bb7e1cfea90305bfdbd6f483aaff46e7c62d9506dcc87e41706483f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726512354139280396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97,PodSandboxId:6b9acdd74ecddd060a36e3c7f9643e094c6ecf9c63625b2c1b08d84599a77c83,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726512354081451639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7-6d4f80895cac,},Annotations:map[string]string{io.ku
bernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733,PodSandboxId:db85e1e134b2e844273a19702c0c215bd723a37344ddb072362f593632972b01,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726512350340801014,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54,PodSandboxId:3b839abce4d8875163e3ca8c70b1a09d8b3bc25eeb284991fe42ee4e6a017886,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726512350306750909,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6,PodSandboxId:2efac27de3b007e9863a5de16d303e54d6f1ae20ae2650df827c148609e8ab95,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726512350284803322,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd39f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311,PodSandboxId:7e39d511fe6d83062ea66ac560fec720d0675e3d05e5c484d6e1c4452f44986a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726512350211720253,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b97d2f06c63f5cc1dd1cde6ea7342266d505bea386b3c3cad98841f4a2f4fb3,PodSandboxId:026592543c3072d9ba687309895c152627bc0eb8b6bb3b29bb9fb0d5c5321cdb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726512025034965970,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-npxwd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7ae52a62-68b2-4df8-9a32-7c101e32fc1f,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9,PodSandboxId:bfbed50ea0cf130ee49fd9870b883a72eb67fc4368b33bd9cf8b4d47a88d7f93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726511966063251295,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jl97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1ecace7-ec89-48df-ba67-9d4db464f114,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:536b0b65dd5d807fcacea7f41a6bb1cdb288c8b451e9b3f62fd85f4f2bf952da,PodSandboxId:06ea3547c2a84a2e69cb1cb88b2f6b14b91f1e84748ccd4ac79c2feac101f066,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726511966050647861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 302e9fd7-e7c3-4885-8081-870d67fa9113,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c,PodSandboxId:362e382e3e94eb6b1527ddf35b920bbd7c16f0cf43ad95f0f77cd6f4dc05b07a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726511954539418738,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pcwtq,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 302842c7-44d7-4798-8bd8-bffb298e5ae5,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1,PodSandboxId:97ebc72e5610a1030f8f12f4d8231ec97a666fc1f5d607d6556cf544d38626e7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726511954469975725,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n6hld,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d0b45a9-faa7-42f6-92b7
-6d4f80895cac,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4,PodSandboxId:16c8a9b30dd0186defd5ed3e2a6d5c1c2c8dd9fa0c1f8562b8db6ce78b37034f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726511941817490471,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f71fd3
9f8b903bb242fba68909eb6d5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1,PodSandboxId:9e23e175729255d5132e7784b658556ae7bfe844bd806a1d990a4378233b8617,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726511941788162655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 896dbad86dc3607e987f75
e8d5fb8b6d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9,PodSandboxId:c31c43bbb7835d7d9a67a73e54f7c153adf4306ae62ab317d482b2195febdc11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726511941768647375,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086ece32f2ae0140cf85d2dbbad4a779,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f,PodSandboxId:e9cbad0a06bab4fe45ae9a7dea04410a2744b925b615d75c2f2c343d1ec6b948,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726511941735381412,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-588591,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0c277e362a7aacc4d6c4b0acce86d8c,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=330c1609-5294-4757-ae69-b4762875ee1e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ca66d6fc74c25       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9779b6ef07763       busybox-7dff88458-npxwd
	c6817a2d4d16a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   01f17e6951bd2       coredns-7c65d6cfc9-jl97q
	a13d60065ee77       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   a2e636e98d9ba       kindnet-pcwtq
	ad4f35e43ce6a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   e77116881bb7e       storage-provisioner
	744df38e318c9       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   6b9acdd74ecdd       kube-proxy-n6hld
	f4ada8d8fc68c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   db85e1e134b2e       kube-scheduler-multinode-588591
	f541570364a8e       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   3b839abce4d88       kube-apiserver-multinode-588591
	e5041a44acd42       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   2efac27de3b00       kube-controller-manager-multinode-588591
	b8d55c2362a4a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   7e39d511fe6d8       etcd-multinode-588591
	5b97d2f06c63f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   026592543c307       busybox-7dff88458-npxwd
	5a75e89d7ed0f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   bfbed50ea0cf1       coredns-7c65d6cfc9-jl97q
	536b0b65dd5d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   06ea3547c2a84       storage-provisioner
	b7906688e5bed       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   362e382e3e94e       kindnet-pcwtq
	88aaa7fc69945       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   97ebc72e5610a       kube-proxy-n6hld
	0c1a836d4e499       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   16c8a9b30dd01       kube-controller-manager-multinode-588591
	dc6240b9d562f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   9e23e17572925       kube-scheduler-multinode-588591
	8ed063e308eaf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   c31c43bbb7835       etcd-multinode-588591
	6299f0d0edaa8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   e9cbad0a06bab       kube-apiserver-multinode-588591
	
	
	==> coredns [5a75e89d7ed0fa74085b2e01515e0a71c783f26deec3dc8513be0ccde74507f9] <==
	[INFO] 10.244.1.2:38361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001803811s
	[INFO] 10.244.1.2:50829 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000113376s
	[INFO] 10.244.1.2:52491 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105334s
	[INFO] 10.244.1.2:44029 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001375788s
	[INFO] 10.244.1.2:33700 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088704s
	[INFO] 10.244.1.2:57781 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009308s
	[INFO] 10.244.1.2:49707 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089767s
	[INFO] 10.244.0.3:60280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108104s
	[INFO] 10.244.0.3:43474 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165734s
	[INFO] 10.244.0.3:60941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010869s
	[INFO] 10.244.0.3:50648 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068862s
	[INFO] 10.244.1.2:49400 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150166s
	[INFO] 10.244.1.2:54278 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123973s
	[INFO] 10.244.1.2:58754 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000106786s
	[INFO] 10.244.1.2:57389 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000093872s
	[INFO] 10.244.0.3:53773 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106526s
	[INFO] 10.244.0.3:54541 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000199549s
	[INFO] 10.244.0.3:48415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000117547s
	[INFO] 10.244.0.3:50287 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000117647s
	[INFO] 10.244.1.2:33435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00023337s
	[INFO] 10.244.1.2:60165 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000186825s
	[INFO] 10.244.1.2:36645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123148s
	[INFO] 10.244.1.2:59569 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122663s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c6817a2d4d16af0fc17ce2e37f814501cb7a8c46b2bed7e973f75d31d1b1929e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38189 - 9940 "HINFO IN 6765561091576542386.6857459497422022788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016467007s
	
	
	==> describe nodes <==
	Name:               multinode-588591
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-588591
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=multinode-588591
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T18_39_08_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:39:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-588591
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:49:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 18:45:53 +0000   Mon, 16 Sep 2024 18:39:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.90
	  Hostname:    multinode-588591
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b679443c1e1a452cb3b1075c2d8ed8e1
	  System UUID:                b679443c-1e1a-452c-b3b1-075c2d8ed8e1
	  Boot ID:                    b96c48ef-4b97-44e3-8117-b11c1bef2f85
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-npxwd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-7c65d6cfc9-jl97q                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-588591                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-pcwtq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-588591             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-588591    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-n6hld                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-588591             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)    kubelet          Node multinode-588591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)    kubelet          Node multinode-588591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)    kubelet          Node multinode-588591 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-588591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-588591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-588591 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-588591 event: Registered Node multinode-588591 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-588591 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-588591 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-588591 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-588591 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-588591 event: Registered Node multinode-588591 in Controller
	
	
	Name:               multinode-588591-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-588591-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8
	                    minikube.k8s.io/name=multinode-588591
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T18_46_33_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 18:46:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-588591-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 18:47:34 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Sep 2024 18:47:04 +0000   Mon, 16 Sep 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    multinode-588591-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92c5bfd786184ceea39937b75880871e
	  System UUID:                92c5bfd7-8618-4cee-a399-37b75880871e
	  Boot ID:                    3558c459-ff2c-49bc-8552-f64d372cec00
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pdqxd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-h69tp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-vcvjk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 9m59s)  kubelet          Node multinode-588591-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 9m59s)  kubelet          Node multinode-588591-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 9m59s)  kubelet          Node multinode-588591-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-588591-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-588591-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-588591-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-588591-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-588591-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-588591-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066874] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.195843] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.131543] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.306425] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +4.017981] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +3.705086] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.065609] kauditd_printk_skb: 158 callbacks suppressed
	[Sep16 18:39] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.091456] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.153180] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.106913] kauditd_printk_skb: 21 callbacks suppressed
	[ +13.971900] kauditd_printk_skb: 60 callbacks suppressed
	[Sep16 18:40] kauditd_printk_skb: 12 callbacks suppressed
	[Sep16 18:45] systemd-fstab-generator[2650]: Ignoring "noauto" option for root device
	[  +0.149807] systemd-fstab-generator[2662]: Ignoring "noauto" option for root device
	[  +0.188900] systemd-fstab-generator[2676]: Ignoring "noauto" option for root device
	[  +0.150344] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.283409] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +0.689940] systemd-fstab-generator[2804]: Ignoring "noauto" option for root device
	[  +1.902382] systemd-fstab-generator[2924]: Ignoring "noauto" option for root device
	[  +4.664651] kauditd_printk_skb: 184 callbacks suppressed
	[  +5.884025] kauditd_printk_skb: 34 callbacks suppressed
	[Sep16 18:46] systemd-fstab-generator[3780]: Ignoring "noauto" option for root device
	[ +19.623369] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [8ed063e308eaf0c863365311b9de33a2a2a1c7664d89bd4fda4f37bee367b5b9] <==
	{"level":"warn","ts":"2024-09-16T18:40:59.688351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"358.391584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T18:40:59.688398Z","caller":"traceutil/trace.go:171","msg":"trace[1484628912] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:637; }","duration":"358.439143ms","start":"2024-09-16T18:40:59.329953Z","end":"2024-09-16T18:40:59.688392Z","steps":["trace[1484628912] 'agreement among raft nodes before linearized reading'  (duration: 358.356329ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.688435Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.329928Z","time spent":"358.50262ms","remote":"127.0.0.1:34756","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-16T18:40:59.688605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.743719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-588591-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T18:40:59.688648Z","caller":"traceutil/trace.go:171","msg":"trace[1508024205] range","detail":"{range_begin:/registry/csinodes/multinode-588591-m03; range_end:; response_count:0; response_revision:637; }","duration":"234.787382ms","start":"2024-09-16T18:40:59.453855Z","end":"2024-09-16T18:40:59.688643Z","steps":["trace[1508024205] 'agreement among raft nodes before linearized reading'  (duration: 234.732028ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T18:40:59.688789Z","caller":"traceutil/trace.go:171","msg":"trace[1574818506] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"358.788516ms","start":"2024-09-16T18:40:59.329994Z","end":"2024-09-16T18:40:59.688783Z","steps":["trace[1574818506] 'process raft request'  (duration: 357.312773ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.689399Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.329989Z","time spent":"359.385691ms","remote":"127.0.0.1:58052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":699,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-8kssm.17f5cd920c757e30\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-8kssm.17f5cd920c757e30\" value_size:619 lease:4160642142840671290 >> failure:<>"}
	{"level":"info","ts":"2024-09-16T18:40:59.689635Z","caller":"traceutil/trace.go:171","msg":"trace[1677215337] transaction","detail":"{read_only:false; response_revision:633; number_of_response:1; }","duration":"356.667825ms","start":"2024-09-16T18:40:59.332957Z","end":"2024-09-16T18:40:59.689625Z","steps":["trace[1677215337] 'process raft request'  (duration: 354.914471ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.689726Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.332939Z","time spent":"356.765835ms","remote":"127.0.0.1:58052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":657,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kindnet.17f5cd920c9f2600\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kindnet.17f5cd920c9f2600\" value_size:586 lease:4160642142840671290 >> failure:<>"}
	{"level":"info","ts":"2024-09-16T18:40:59.689974Z","caller":"traceutil/trace.go:171","msg":"trace[1642580106] transaction","detail":"{read_only:false; response_revision:634; number_of_response:1; }","duration":"355.522411ms","start":"2024-09-16T18:40:59.334445Z","end":"2024-09-16T18:40:59.689967Z","steps":["trace[1642580106] 'process raft request'  (duration: 353.468257ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.690048Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.334429Z","time spent":"355.586868ms","remote":"127.0.0.1:58470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4708,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:526 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4660 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"info","ts":"2024-09-16T18:40:59.690225Z","caller":"traceutil/trace.go:171","msg":"trace[1405851371] transaction","detail":"{read_only:false; response_revision:635; number_of_response:1; }","duration":"354.43609ms","start":"2024-09-16T18:40:59.335781Z","end":"2024-09-16T18:40:59.690217Z","steps":["trace[1405851371] 'process raft request'  (duration: 352.172714ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.690322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.335768Z","time spent":"354.532382ms","remote":"127.0.0.1:58150","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:609 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-09-16T18:40:59.690505Z","caller":"traceutil/trace.go:171","msg":"trace[943818931] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"353.34272ms","start":"2024-09-16T18:40:59.337156Z","end":"2024-09-16T18:40:59.690499Z","steps":["trace[943818931] 'process raft request'  (duration: 350.833953ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T18:40:59.690610Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T18:40:59.337144Z","time spent":"353.443052ms","remote":"127.0.0.1:58156","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2331,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-588591-m03\" mod_revision:623 > success:<request_put:<key:\"/registry/minions/multinode-588591-m03\" value_size:2285 >> failure:<request_range:<key:\"/registry/minions/multinode-588591-m03\" > >"}
	{"level":"info","ts":"2024-09-16T18:44:14.734761Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T18:44:14.734900Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-588591","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	{"level":"warn","ts":"2024-09-16T18:44:14.735070Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:44:14.735223Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:44:14.772116Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T18:44:14.772159Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.90:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T18:44:14.773596Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8d381aaacda0b9bd","current-leader-member-id":"8d381aaacda0b9bd"}
	{"level":"info","ts":"2024-09-16T18:44:14.777152Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-16T18:44:14.777272Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-16T18:44:14.777300Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-588591","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.90:2380"],"advertise-client-urls":["https://192.168.39.90:2379"]}
	
	
	==> etcd [b8d55c2362a4a46dd4749b094c41371758c59aa9cf0e3456645f24fde7249311] <==
	{"level":"info","ts":"2024-09-16T18:45:50.587886Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.90:2380"}
	{"level":"info","ts":"2024-09-16T18:45:50.583125Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T18:45:50.587906Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T18:45:50.587916Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T18:45:50.583177Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-16T18:45:50.584130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd switched to configuration voters=(10175912678940260797)"}
	{"level":"info","ts":"2024-09-16T18:45:50.588830Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8cf3a1558a63fa9e","local-member-id":"8d381aaacda0b9bd","added-peer-id":"8d381aaacda0b9bd","added-peer-peer-urls":["https://192.168.39.90:2380"]}
	{"level":"info","ts":"2024-09-16T18:45:50.588984Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8cf3a1558a63fa9e","local-member-id":"8d381aaacda0b9bd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T18:45:50.589037Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T18:45:51.631604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T18:45:51.631727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T18:45:51.631780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd received MsgPreVoteResp from 8d381aaacda0b9bd at term 2"}
	{"level":"info","ts":"2024-09-16T18:45:51.631818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.631843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd received MsgVoteResp from 8d381aaacda0b9bd at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.631870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8d381aaacda0b9bd became leader at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.631896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8d381aaacda0b9bd elected leader 8d381aaacda0b9bd at term 3"}
	{"level":"info","ts":"2024-09-16T18:45:51.634630Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8d381aaacda0b9bd","local-member-attributes":"{Name:multinode-588591 ClientURLs:[https://192.168.39.90:2379]}","request-path":"/0/members/8d381aaacda0b9bd/attributes","cluster-id":"8cf3a1558a63fa9e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T18:45:51.634852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T18:45:51.635200Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T18:45:51.635962Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T18:45:51.636741Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T18:45:51.637356Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T18:45:51.638085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.90:2379"}
	{"level":"info","ts":"2024-09-16T18:45:51.638159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T18:45:51.638220Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:49:57 up 11 min,  0 users,  load average: 0.25, 0.31, 0.18
	Linux multinode-588591 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a13d60065ee776ce35474409dfc56cd1ee07f3d54cb4b6a86f78df853d91ac99] <==
	I0916 18:48:55.383966       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:49:05.391375       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:49:05.391618       1 main.go:299] handling current node
	I0916 18:49:05.391739       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:49:05.391771       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:49:15.384295       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:49:15.384423       1 main.go:299] handling current node
	I0916 18:49:15.384451       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:49:15.384482       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:49:25.383352       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:49:25.383438       1 main.go:299] handling current node
	I0916 18:49:25.383458       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:49:25.383464       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:49:35.390997       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:49:35.391174       1 main.go:299] handling current node
	I0916 18:49:35.391213       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:49:35.391235       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:49:45.383364       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:49:45.383418       1 main.go:299] handling current node
	I0916 18:49:45.383459       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:49:45.383465       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:49:55.384186       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:49:55.384246       1 main.go:299] handling current node
	I0916 18:49:55.384265       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:49:55.384274       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b7906688e5beda46287a0d089e14a6d6cc38b600020e2d4092c2ce0932713f3c] <==
	I0916 18:43:25.497910       1 main.go:299] handling current node
	I0916 18:43:35.499225       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:43:35.499438       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:43:35.499698       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:43:35.499733       1 main.go:299] handling current node
	I0916 18:43:35.499770       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:43:35.499789       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:43:45.500021       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:43:45.500162       1 main.go:299] handling current node
	I0916 18:43:45.500253       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:43:45.500343       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:43:45.500654       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:43:45.500689       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:43:55.491282       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:43:55.491401       1 main.go:299] handling current node
	I0916 18:43:55.491443       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:43:55.491465       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:43:55.491702       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:43:55.491736       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:44:05.492939       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0916 18:44:05.493087       1 main.go:322] Node multinode-588591-m02 has CIDR [10.244.1.0/24] 
	I0916 18:44:05.493289       1 main.go:295] Handling node with IPs: map[192.168.39.195:{}]
	I0916 18:44:05.493334       1 main.go:322] Node multinode-588591-m03 has CIDR [10.244.4.0/24] 
	I0916 18:44:05.493410       1 main.go:295] Handling node with IPs: map[192.168.39.90:{}]
	I0916 18:44:05.493430       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6299f0d0edaa82424fd09140d589c6078dc9082ba0bf6b8074472161ff5ebf7f] <==
	W0916 18:44:14.754100       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754121       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754141       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754159       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754179       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754200       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754219       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754237       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.754257       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764062       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764087       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764106       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764125       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764145       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764163       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764181       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764199       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764220       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764239       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764260       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764278       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764297       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764316       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764343       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0916 18:44:14.764365       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f541570364a8e8d307afb93f2c63ecf263ede891b46606abefd6ecf436588f54] <==
	I0916 18:45:53.095859       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 18:45:53.095908       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 18:45:53.101162       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 18:45:53.101374       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 18:45:53.101447       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 18:45:53.101641       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 18:45:53.106441       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 18:45:53.106837       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 18:45:53.114980       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 18:45:53.115014       1 policy_source.go:224] refreshing policies
	I0916 18:45:53.118817       1 aggregator.go:171] initial CRD sync complete...
	I0916 18:45:53.118846       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 18:45:53.118852       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 18:45:53.118858       1 cache.go:39] Caches are synced for autoregister controller
	I0916 18:45:53.122418       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0916 18:45:53.128356       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 18:45:53.200257       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 18:45:54.006497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 18:45:55.642612       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 18:45:55.909915       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 18:45:55.921869       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 18:45:56.008025       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 18:45:56.016810       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 18:45:56.837367       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 18:45:56.887644       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0c1a836d4e4990644a538c9c533ecfed8752ead0077066917c96d357e36645b4] <==
	I0916 18:41:47.834440       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.031349       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-588591-m03\" does not exist"
	I0916 18:41:49.032706       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:41:49.044248       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-588591-m03" podCIDRs=["10.244.4.0/24"]
	I0916 18:41:49.044705       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.045025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.062141       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.067691       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.362874       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:49.713089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:51.504042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:41:59.171072       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:09.123809       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:42:09.124049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:09.137901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:11.467801       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:51.484223       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:51.484577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:42:51.502320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:56.518244       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:42:56.532132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:42:56.554688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:42:56.573920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.532187ms"
	I0916 18:42:56.574643       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.697µs"
	I0916 18:43:06.637397       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	
	
	==> kube-controller-manager [e5041a44acd424b7952fa86aba7b6c2b0472bee3054d2126b55badc4fe616ac6] <==
	E0916 18:47:12.289361       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-588591-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-588591-m03"
	E0916 18:47:12.289412       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-588591-m03': failed to patch node CIDR: Node \"multinode-588591-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0916 18:47:12.289434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:12.294777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:12.511336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:12.861195       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:16.652447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:22.398370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:30.813019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:30.813091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:47:30.825300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:31.574963       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:35.582509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:35.611181       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:36.043395       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m03"
	I0916 18:47:36.043584       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-588591-m02"
	I0916 18:48:16.461920       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z7bdt"
	I0916 18:48:16.494583       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-z7bdt"
	I0916 18:48:16.494840       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8kssm"
	I0916 18:48:16.527268       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8kssm"
	I0916 18:48:16.594590       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:48:16.613016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
	I0916 18:48:16.644867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.209588ms"
	I0916 18:48:16.645382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.734µs"
	I0916 18:48:21.675280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-588591-m02"
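
The restarted controller-manager keeps rejecting its own CIDR patch for multinode-588591-m03: the node already holds podCIDR 10.244.3.0/24, the allocator tries to attach 10.244.2.0/24 as well, and the apiserver refuses because a node may carry only one CIDR per IP family and podCIDR cannot change once set, so the key is requeued. A quick way to see what the apiserver has recorded, assuming the context is named after the profile, is:

  # Show the CIDR(s) already bound to the node the ipam controller cannot patch.
  kubectl --context multinode-588591 get node multinode-588591-m03 \
    -o jsonpath='{.spec.podCIDR}{"\n"}{.spec.podCIDRs}{"\n"}'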
	
	
	==> kube-proxy [744df38e318c9f030aa23403acadbf5c658174ead2f8ab990552edab72c1da97] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:45:54.567608       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 18:45:54.578731       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.90"]
	E0916 18:45:54.579054       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:45:54.666269       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:45:54.666359       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:45:54.666399       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:45:54.673960       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:45:54.675847       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:45:54.676215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:45:54.688222       1 config.go:199] "Starting service config controller"
	I0916 18:45:54.689886       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:45:54.697403       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:45:54.697480       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:45:54.709792       1 config.go:328] "Starting node config controller"
	I0916 18:45:54.711847       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:45:54.796476       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:45:54.798342       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 18:45:54.812078       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [88aaa7fc699451c1f29078bfdf364d47c7f16f98b58d178310cd08b82fde82f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 18:39:14.672200       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 18:39:14.682030       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.90"]
	E0916 18:39:14.683634       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 18:39:14.747729       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 18:39:14.747770       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 18:39:14.747793       1 server_linux.go:169] "Using iptables Proxier"
	I0916 18:39:14.750415       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 18:39:14.750920       1 server.go:483] "Version info" version="v1.31.1"
	I0916 18:39:14.750966       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:39:14.752233       1 config.go:199] "Starting service config controller"
	I0916 18:39:14.752326       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 18:39:14.752426       1 config.go:105] "Starting endpoint slice config controller"
	I0916 18:39:14.752450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 18:39:14.753092       1 config.go:328] "Starting node config controller"
	I0916 18:39:14.754756       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 18:39:14.852730       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 18:39:14.852794       1 shared_informer.go:320] Caches are synced for service config
	I0916 18:39:14.854963       1 shared_informer.go:320] Caches are synced for node config
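
Both kube-proxy instances follow the same startup path: the nftables cleanup step fails because the guest kernel rejects "add table ip kube-proxy" / "add table ip6 kube-proxy" with "Operation not supported", and kube-proxy then falls back to the iptables proxier in single-stack IPv4 mode. A hedged way to confirm which packet-filtering backends the Buildroot guest kernel actually provides (module names assumed, not taken from this report) is:

  # The pipe runs on the host side, which is fine: lsmod output is just streamed back.
  minikube -p multinode-588591 ssh -- lsmod | grep -E 'nf_tables|ip_tables|x_tables'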
	
	
	==> kube-scheduler [dc6240b9d562f98833ff052ca408f7085d4d080c0323ceb63df91a88d24821d1] <==
	E0916 18:39:04.503392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:04.501478       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 18:39:04.503444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.303749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 18:39:05.303779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.344431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 18:39:05.344488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.351839       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 18:39:05.351888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.443594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 18:39:05.443738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.508403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 18:39:05.508455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.602832       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 18:39:05.602880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.615661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 18:39:05.615713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.615781       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 18:39:05.615792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.764096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 18:39:05.764145       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 18:39:05.961645       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 18:39:05.961770       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 18:39:08.377594       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 18:44:14.748929       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f4ada8d8fc68c79c0a8ea12f14854035b077cf91ab69653c679878dcb4ece733] <==
	I0916 18:45:51.446869       1 serving.go:386] Generated self-signed cert in-memory
	W0916 18:45:53.046455       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 18:45:53.046565       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 18:45:53.047790       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 18:45:53.048633       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 18:45:53.121678       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 18:45:53.122785       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 18:45:53.128338       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 18:45:53.128600       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 18:45:53.129321       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 18:45:53.129698       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 18:45:53.230112       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 18:48:39 multinode-588591 kubelet[2931]: E0916 18:48:39.673942    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512519673133034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:48:49 multinode-588591 kubelet[2931]: E0916 18:48:49.628030    2931 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:48:49 multinode-588591 kubelet[2931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:48:49 multinode-588591 kubelet[2931]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:48:49 multinode-588591 kubelet[2931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:48:49 multinode-588591 kubelet[2931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:48:49 multinode-588591 kubelet[2931]: E0916 18:48:49.675411    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512529675221125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:48:49 multinode-588591 kubelet[2931]: E0916 18:48:49.675433    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512529675221125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:48:59 multinode-588591 kubelet[2931]: E0916 18:48:59.676683    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512539676248351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:48:59 multinode-588591 kubelet[2931]: E0916 18:48:59.676727    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512539676248351,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:09 multinode-588591 kubelet[2931]: E0916 18:49:09.678601    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512549678230472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:09 multinode-588591 kubelet[2931]: E0916 18:49:09.679040    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512549678230472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:19 multinode-588591 kubelet[2931]: E0916 18:49:19.680721    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512559680224711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:19 multinode-588591 kubelet[2931]: E0916 18:49:19.680745    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512559680224711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:29 multinode-588591 kubelet[2931]: E0916 18:49:29.682363    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512569681890322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:29 multinode-588591 kubelet[2931]: E0916 18:49:29.683581    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512569681890322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:39 multinode-588591 kubelet[2931]: E0916 18:49:39.687970    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512579687642787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:39 multinode-588591 kubelet[2931]: E0916 18:49:39.688021    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512579687642787,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:49 multinode-588591 kubelet[2931]: E0916 18:49:49.629363    2931 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 16 18:49:49 multinode-588591 kubelet[2931]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 16 18:49:49 multinode-588591 kubelet[2931]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 16 18:49:49 multinode-588591 kubelet[2931]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 16 18:49:49 multinode-588591 kubelet[2931]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 16 18:49:49 multinode-588591 kubelet[2931]: E0916 18:49:49.690020    2931 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512589689088559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 16 18:49:49 multinode-588591 kubelet[2931]: E0916 18:49:49.690139    2931 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726512589689088559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 18:49:56.826915  413287 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19649-371203/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-588591 -n multinode-588591
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-588591 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.50s)
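Note on the log excerpt above: the kubelet messages repeated every ten seconds are background noise rather than the stop failure itself. The "failed to get HasDedicatedImageFs ... missing image stats" errors show an ImageFsInfoResponse whose ImageFilesystems entry is populated but whose ContainerFilesystems list is empty, which the eviction manager treats as missing stats; the "iptables canary" error shows that the ip6tables nat table is unavailable in the guest kernel (the module is not loaded, as the "do you need to insmod?" hint suggests). A possible way to confirm both by hand, assuming the multinode-588591 profile is still running; this is a diagnostic sketch and not part of the test run:

	out/minikube-linux-amd64 ssh -p multinode-588591 -- 'sudo crictl imagefsinfo'        # what image/container filesystem stats CRI-O actually reports
	out/minikube-linux-amd64 ssh -p multinode-588591 -- 'lsmod | grep ip6table_nat'      # empty output means the ip6tables nat module is not loaded
	out/minikube-linux-amd64 ssh -p multinode-588591 -- 'sudo ip6tables -t nat -L -n'    # reproduces the "table does not exist" error when the module is missing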

                                                
                                    
TestPreload (273.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-742808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-742808 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.790268229s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-742808 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-742808 image pull gcr.io/k8s-minikube/busybox: (3.393860595s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-742808
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-742808: exit status 82 (2m0.469676125s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-742808"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-742808 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-09-16 18:58:23.079039357 +0000 UTC m=+5643.792510126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-742808 -n test-preload-742808
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-742808 -n test-preload-742808: exit status 3 (18.446669079s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 18:58:41.521278  416292 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host
	E0916 18:58:41.521304  416292 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.219:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-742808" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-742808" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-742808
--- FAIL: TestPreload (273.07s)
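Note on the failure above: exit status 82 corresponds to GUEST_STOP_TIMEOUT, i.e. `minikube stop` waited about two minutes for the kvm2 guest to power off while it stayed in the "Running" state; the later status probe then failed with "no route to host", so the guest eventually dropped off the network but the profile was left in an error state and had to be deleted. If this recurs, a diagnostic sketch along these lines may help (profile name taken from this run; under the kvm2 driver the libvirt domain is normally named after the profile):

	out/minikube-linux-amd64 stop -p test-preload-742808 --alsologtostderr -v=3    # retry the stop with verbose driver logging
	sudo virsh list --all                                                          # see whether libvirt still considers the domain running
	sudo virsh dominfo test-preload-742808                                         # domain state and resource details
	sudo virsh destroy test-preload-742808                                         # hard power-off as a last resort before `minikube delete`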

                                                
                                    
TestKubernetesUpgrade (416.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m56.909645515s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-698346] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-698346" primary control-plane node in "kubernetes-upgrade-698346" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 19:00:37.517082  417385 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:00:37.517403  417385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:00:37.517415  417385 out.go:358] Setting ErrFile to fd 2...
	I0916 19:00:37.517422  417385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:00:37.517642  417385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 19:00:37.519112  417385 out.go:352] Setting JSON to false
	I0916 19:00:37.520186  417385 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":9781,"bootTime":1726503457,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 19:00:37.520251  417385 start.go:139] virtualization: kvm guest
	I0916 19:00:37.522722  417385 out.go:177] * [kubernetes-upgrade-698346] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 19:00:37.524339  417385 notify.go:220] Checking for updates...
	I0916 19:00:37.525413  417385 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:00:37.528058  417385 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:00:37.530638  417385 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 19:00:37.533319  417385 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 19:00:37.535868  417385 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 19:00:37.538629  417385 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:00:37.540342  417385 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:00:37.580302  417385 out.go:177] * Using the kvm2 driver based on user configuration
	I0916 19:00:37.582624  417385 start.go:297] selected driver: kvm2
	I0916 19:00:37.582646  417385 start.go:901] validating driver "kvm2" against <nil>
	I0916 19:00:37.582659  417385 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:00:37.583325  417385 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 19:00:37.600063  417385 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 19:00:37.618416  417385 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 19:00:37.618484  417385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 19:00:37.618830  417385 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 19:00:37.618877  417385 cni.go:84] Creating CNI manager for ""
	I0916 19:00:37.618943  417385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:00:37.618961  417385 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 19:00:37.619051  417385 start.go:340] cluster config:
	{Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:00:37.619224  417385 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 19:00:37.621306  417385 out.go:177] * Starting "kubernetes-upgrade-698346" primary control-plane node in "kubernetes-upgrade-698346" cluster
	I0916 19:00:37.622683  417385 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 19:00:37.622726  417385 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 19:00:37.622736  417385 cache.go:56] Caching tarball of preloaded images
	I0916 19:00:37.622826  417385 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 19:00:37.622839  417385 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0916 19:00:37.623264  417385 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/config.json ...
	I0916 19:00:37.623299  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/config.json: {Name:mk7dc7276f42940bcd31b98de45946900ee7d36b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:00:37.623487  417385 start.go:360] acquireMachinesLock for kubernetes-upgrade-698346: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 19:01:02.254313  417385 start.go:364] duration metric: took 24.630781715s to acquireMachinesLock for "kubernetes-upgrade-698346"
	I0916 19:01:02.254386  417385 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 19:01:02.254591  417385 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 19:01:02.257094  417385 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 19:01:02.257426  417385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 19:01:02.257563  417385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 19:01:02.275574  417385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I0916 19:01:02.276037  417385 main.go:141] libmachine: () Calling .GetVersion
	I0916 19:01:02.276679  417385 main.go:141] libmachine: Using API Version  1
	I0916 19:01:02.276701  417385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 19:01:02.277056  417385 main.go:141] libmachine: () Calling .GetMachineName
	I0916 19:01:02.277235  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:01:02.277397  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:02.277557  417385 start.go:159] libmachine.API.Create for "kubernetes-upgrade-698346" (driver="kvm2")
	I0916 19:01:02.277593  417385 client.go:168] LocalClient.Create starting
	I0916 19:01:02.277631  417385 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 19:01:02.277672  417385 main.go:141] libmachine: Decoding PEM data...
	I0916 19:01:02.277701  417385 main.go:141] libmachine: Parsing certificate...
	I0916 19:01:02.277815  417385 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 19:01:02.277846  417385 main.go:141] libmachine: Decoding PEM data...
	I0916 19:01:02.277862  417385 main.go:141] libmachine: Parsing certificate...
	I0916 19:01:02.277891  417385 main.go:141] libmachine: Running pre-create checks...
	I0916 19:01:02.277903  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .PreCreateCheck
	I0916 19:01:02.278233  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetConfigRaw
	I0916 19:01:02.278612  417385 main.go:141] libmachine: Creating machine...
	I0916 19:01:02.278627  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .Create
	I0916 19:01:02.278794  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Creating KVM machine...
	I0916 19:01:02.280227  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found existing default KVM network
	I0916 19:01:02.281516  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:02.281301  417709 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:84:58} reservation:<nil>}
	I0916 19:01:02.282229  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:02.282154  417709 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a50}
	I0916 19:01:02.282306  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | created network xml: 
	I0916 19:01:02.282347  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | <network>
	I0916 19:01:02.282362  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |   <name>mk-kubernetes-upgrade-698346</name>
	I0916 19:01:02.282369  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |   <dns enable='no'/>
	I0916 19:01:02.282377  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |   
	I0916 19:01:02.282385  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0916 19:01:02.282398  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |     <dhcp>
	I0916 19:01:02.282407  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0916 19:01:02.282416  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |     </dhcp>
	I0916 19:01:02.282421  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |   </ip>
	I0916 19:01:02.282428  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG |   
	I0916 19:01:02.282443  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | </network>
	I0916 19:01:02.282454  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | 
	I0916 19:01:02.288046  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | trying to create private KVM network mk-kubernetes-upgrade-698346 192.168.50.0/24...
	I0916 19:01:02.368950  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | private KVM network mk-kubernetes-upgrade-698346 192.168.50.0/24 created
	I0916 19:01:02.368981  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:02.368873  417709 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 19:01:02.368993  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346 ...
	I0916 19:01:02.369008  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 19:01:02.369183  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 19:01:02.644041  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:02.643931  417709 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa...
	I0916 19:01:03.061540  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:03.061362  417709 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/kubernetes-upgrade-698346.rawdisk...
	I0916 19:01:03.061583  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Writing magic tar header
	I0916 19:01:03.061666  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Writing SSH key tar header
	I0916 19:01:03.061685  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:03.061476  417709 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346 ...
	I0916 19:01:03.061699  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346 (perms=drwx------)
	I0916 19:01:03.061741  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 19:01:03.061774  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 19:01:03.061792  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346
	I0916 19:01:03.061807  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 19:01:03.061820  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 19:01:03.061839  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 19:01:03.061853  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 19:01:03.061870  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 19:01:03.061886  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home/jenkins
	I0916 19:01:03.061896  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 19:01:03.061908  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 19:01:03.061918  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Creating domain...
	I0916 19:01:03.061930  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Checking permissions on dir: /home
	I0916 19:01:03.061940  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Skipping /home - not owner
	I0916 19:01:03.062967  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) define libvirt domain using xml: 
	I0916 19:01:03.062989  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) <domain type='kvm'>
	I0916 19:01:03.063002  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <name>kubernetes-upgrade-698346</name>
	I0916 19:01:03.063010  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <memory unit='MiB'>2200</memory>
	I0916 19:01:03.063021  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <vcpu>2</vcpu>
	I0916 19:01:03.063028  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <features>
	I0916 19:01:03.063036  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <acpi/>
	I0916 19:01:03.063043  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <apic/>
	I0916 19:01:03.063070  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <pae/>
	I0916 19:01:03.063084  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     
	I0916 19:01:03.063093  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   </features>
	I0916 19:01:03.063105  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <cpu mode='host-passthrough'>
	I0916 19:01:03.063116  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   
	I0916 19:01:03.063123  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   </cpu>
	I0916 19:01:03.063132  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <os>
	I0916 19:01:03.063146  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <type>hvm</type>
	I0916 19:01:03.063158  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <boot dev='cdrom'/>
	I0916 19:01:03.063166  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <boot dev='hd'/>
	I0916 19:01:03.063185  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <bootmenu enable='no'/>
	I0916 19:01:03.063195  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   </os>
	I0916 19:01:03.063204  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   <devices>
	I0916 19:01:03.063228  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <disk type='file' device='cdrom'>
	I0916 19:01:03.063268  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/boot2docker.iso'/>
	I0916 19:01:03.063290  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <target dev='hdc' bus='scsi'/>
	I0916 19:01:03.063304  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <readonly/>
	I0916 19:01:03.063314  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </disk>
	I0916 19:01:03.063350  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <disk type='file' device='disk'>
	I0916 19:01:03.063374  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 19:01:03.063398  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/kubernetes-upgrade-698346.rawdisk'/>
	I0916 19:01:03.063409  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <target dev='hda' bus='virtio'/>
	I0916 19:01:03.063421  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </disk>
	I0916 19:01:03.063431  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <interface type='network'>
	I0916 19:01:03.063442  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <source network='mk-kubernetes-upgrade-698346'/>
	I0916 19:01:03.063456  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <model type='virtio'/>
	I0916 19:01:03.063468  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </interface>
	I0916 19:01:03.063478  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <interface type='network'>
	I0916 19:01:03.063487  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <source network='default'/>
	I0916 19:01:03.063497  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <model type='virtio'/>
	I0916 19:01:03.063506  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </interface>
	I0916 19:01:03.063516  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <serial type='pty'>
	I0916 19:01:03.063525  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <target port='0'/>
	I0916 19:01:03.063538  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </serial>
	I0916 19:01:03.063550  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <console type='pty'>
	I0916 19:01:03.063576  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <target type='serial' port='0'/>
	I0916 19:01:03.063587  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </console>
	I0916 19:01:03.063594  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     <rng model='virtio'>
	I0916 19:01:03.063607  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)       <backend model='random'>/dev/random</backend>
	I0916 19:01:03.063621  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     </rng>
	I0916 19:01:03.063629  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     
	I0916 19:01:03.063639  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)     
	I0916 19:01:03.063647  417385 main.go:141] libmachine: (kubernetes-upgrade-698346)   </devices>
	I0916 19:01:03.063658  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) </domain>
	I0916 19:01:03.063669  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) 
	I0916 19:01:03.068415  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:5d:52:96 in network default
	I0916 19:01:03.069182  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Ensuring networks are active...
	I0916 19:01:03.069208  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:03.069985  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Ensuring network default is active
	I0916 19:01:03.070449  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Ensuring network mk-kubernetes-upgrade-698346 is active
	I0916 19:01:03.071079  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Getting domain xml...
	I0916 19:01:03.072022  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Creating domain...
	I0916 19:01:04.497033  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Waiting to get IP...
	I0916 19:01:04.498152  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:04.499042  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:04.499114  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:04.499036  417709 retry.go:31] will retry after 277.676362ms: waiting for machine to come up
	I0916 19:01:04.778731  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:04.779359  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:04.779390  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:04.779270  417709 retry.go:31] will retry after 343.792986ms: waiting for machine to come up
	I0916 19:01:05.124839  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:05.125334  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:05.125366  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:05.125281  417709 retry.go:31] will retry after 376.248144ms: waiting for machine to come up
	I0916 19:01:05.502971  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:05.503427  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:05.503453  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:05.503360  417709 retry.go:31] will retry after 443.219891ms: waiting for machine to come up
	I0916 19:01:05.948704  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:05.949184  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:05.949208  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:05.949156  417709 retry.go:31] will retry after 557.966666ms: waiting for machine to come up
	I0916 19:01:06.509042  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:06.509540  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:06.509567  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:06.509478  417709 retry.go:31] will retry after 638.78519ms: waiting for machine to come up
	I0916 19:01:07.150240  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:07.150609  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:07.150662  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:07.150560  417709 retry.go:31] will retry after 806.249788ms: waiting for machine to come up
	I0916 19:01:07.958083  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:07.958529  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:07.958556  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:07.958495  417709 retry.go:31] will retry after 1.487118397s: waiting for machine to come up
	I0916 19:01:09.447390  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:09.447877  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:09.447900  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:09.447830  417709 retry.go:31] will retry after 1.546536102s: waiting for machine to come up
	I0916 19:01:10.995685  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:10.996044  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:10.996072  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:10.995996  417709 retry.go:31] will retry after 1.909047474s: waiting for machine to come up
	I0916 19:01:12.907457  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:12.907975  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:12.908004  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:12.907911  417709 retry.go:31] will retry after 2.336429922s: waiting for machine to come up
	I0916 19:01:15.246221  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:15.246657  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:15.246715  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:15.246621  417709 retry.go:31] will retry after 2.508467111s: waiting for machine to come up
	I0916 19:01:17.756251  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:17.756817  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:17.756847  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:17.756744  417709 retry.go:31] will retry after 4.299766087s: waiting for machine to come up
	I0916 19:01:22.061443  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:22.061915  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find current IP address of domain kubernetes-upgrade-698346 in network mk-kubernetes-upgrade-698346
	I0916 19:01:22.061936  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | I0916 19:01:22.061865  417709 retry.go:31] will retry after 3.664964298s: waiting for machine to come up
	I0916 19:01:25.731020  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.731575  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Found IP for machine: 192.168.50.23
	I0916 19:01:25.731602  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has current primary IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.731609  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Reserving static IP address...
	I0916 19:01:25.731920  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-698346", mac: "52:54:00:fe:2a:df", ip: "192.168.50.23"} in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.811080  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Reserved static IP address: 192.168.50.23
	I0916 19:01:25.811112  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Getting to WaitForSSH function...
	I0916 19:01:25.811122  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Waiting for SSH to be available...
	I0916 19:01:25.814244  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.814791  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:25.814825  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.815017  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Using SSH client type: external
	I0916 19:01:25.815048  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa (-rw-------)
	I0916 19:01:25.815098  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 19:01:25.815120  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | About to run SSH command:
	I0916 19:01:25.815134  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | exit 0
	I0916 19:01:25.946057  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | SSH cmd err, output: <nil>: 
	I0916 19:01:25.946364  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) KVM machine creation complete!
	I0916 19:01:25.946737  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetConfigRaw
	I0916 19:01:25.947365  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:25.947558  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:25.947774  417385 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 19:01:25.947789  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetState
	I0916 19:01:25.949278  417385 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 19:01:25.949291  417385 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 19:01:25.949297  417385 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 19:01:25.949303  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:25.951873  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.952309  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:25.952336  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:25.952550  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:25.952777  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:25.952990  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:25.953156  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:25.953314  417385 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:25.953563  417385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:01:25.953579  417385 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 19:01:26.064500  417385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:01:26.064531  417385 main.go:141] libmachine: Detecting the provisioner...
	I0916 19:01:26.064544  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:26.067425  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.067799  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.067833  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.068074  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:26.068319  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.068499  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.068668  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:26.068851  417385 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:26.069077  417385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:01:26.069091  417385 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 19:01:26.186023  417385 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 19:01:26.186112  417385 main.go:141] libmachine: found compatible host: buildroot
	I0916 19:01:26.186120  417385 main.go:141] libmachine: Provisioning with buildroot...
	I0916 19:01:26.186128  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:01:26.186382  417385 buildroot.go:166] provisioning hostname "kubernetes-upgrade-698346"
	I0916 19:01:26.186411  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:01:26.186653  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:26.189488  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.189924  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.189954  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.190099  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:26.190295  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.190536  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.190684  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:26.190838  417385 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:26.191027  417385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:01:26.191038  417385 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-698346 && echo "kubernetes-upgrade-698346" | sudo tee /etc/hostname
	I0916 19:01:26.321902  417385 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-698346
	
	I0916 19:01:26.321935  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:26.325146  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.325570  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.325628  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.325823  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:26.326056  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.326227  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.326383  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:26.326537  417385 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:26.326758  417385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:01:26.326779  417385 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-698346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-698346/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-698346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 19:01:26.451597  417385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:01:26.451633  417385 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 19:01:26.451668  417385 buildroot.go:174] setting up certificates
	I0916 19:01:26.451680  417385 provision.go:84] configureAuth start
	I0916 19:01:26.451698  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:01:26.451993  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:01:26.454626  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.455108  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.455138  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.455260  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:26.457779  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.458133  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.458176  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.458346  417385 provision.go:143] copyHostCerts
	I0916 19:01:26.458426  417385 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 19:01:26.458437  417385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 19:01:26.458494  417385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 19:01:26.458615  417385 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 19:01:26.458624  417385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 19:01:26.458644  417385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 19:01:26.458710  417385 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 19:01:26.458717  417385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 19:01:26.458734  417385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 19:01:26.458794  417385 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-698346 san=[127.0.0.1 192.168.50.23 kubernetes-upgrade-698346 localhost minikube]
	I0916 19:01:26.749946  417385 provision.go:177] copyRemoteCerts
	I0916 19:01:26.750070  417385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 19:01:26.750111  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:26.752745  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.753086  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.753116  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.753267  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:26.753505  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.753692  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:26.753825  417385 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:01:26.840787  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 19:01:26.868360  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0916 19:01:26.921548  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 19:01:26.948625  417385 provision.go:87] duration metric: took 496.921157ms to configureAuth
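	(For context on the configureAuth step logged above: it generates a server certificate whose SANs cover the VM's IP and hostnames, signed by the minikube CA in ca.pem/ca-key.pem. A minimal Go sketch of that kind of cert generation follows; it is self-signed for brevity rather than CA-signed, and all subject/SAN values are simply copied from the log line above, so treat it as an illustration, not minikube's provision code.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Generate the server key pair.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-698346"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the "generating server cert" line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.23")},
			DNSNames:    []string{"kubernetes-upgrade-698346", "localhost", "minikube"},
		}
		// Self-signed here; the real server.pem is signed with the minikube CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}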
	I0916 19:01:26.948665  417385 buildroot.go:189] setting minikube options for container-runtime
	I0916 19:01:26.948874  417385 config.go:182] Loaded profile config "kubernetes-upgrade-698346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 19:01:26.949008  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:26.951672  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.952050  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:26.952084  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:26.952251  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:26.952468  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.952641  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:26.952739  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:26.952888  417385 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:26.953097  417385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:01:26.953113  417385 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 19:01:27.180597  417385 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 19:01:27.180633  417385 main.go:141] libmachine: Checking connection to Docker...
	I0916 19:01:27.180645  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetURL
	I0916 19:01:27.182086  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | Using libvirt version 6000000
	I0916 19:01:27.184122  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.184538  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.184574  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.184685  417385 main.go:141] libmachine: Docker is up and running!
	I0916 19:01:27.184707  417385 main.go:141] libmachine: Reticulating splines...
	I0916 19:01:27.184722  417385 client.go:171] duration metric: took 24.907110972s to LocalClient.Create
	I0916 19:01:27.184749  417385 start.go:167] duration metric: took 24.907194455s to libmachine.API.Create "kubernetes-upgrade-698346"
	I0916 19:01:27.184763  417385 start.go:293] postStartSetup for "kubernetes-upgrade-698346" (driver="kvm2")
	I0916 19:01:27.184777  417385 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 19:01:27.184801  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:27.185078  417385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 19:01:27.185105  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:27.187662  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.188326  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.188374  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.188525  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:27.188711  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:27.188856  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:27.188985  417385 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:01:27.275968  417385 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 19:01:27.280968  417385 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 19:01:27.281005  417385 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 19:01:27.281077  417385 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 19:01:27.281170  417385 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 19:01:27.281269  417385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 19:01:27.292135  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:01:27.320304  417385 start.go:296] duration metric: took 135.524861ms for postStartSetup
	I0916 19:01:27.320380  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetConfigRaw
	I0916 19:01:27.321116  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:01:27.323764  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.324109  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.324145  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.324413  417385 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/config.json ...
	I0916 19:01:27.324630  417385 start.go:128] duration metric: took 25.070024131s to createHost
	I0916 19:01:27.324655  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:27.327399  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.327734  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.327771  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.327966  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:27.328185  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:27.328404  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:27.328572  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:27.328747  417385 main.go:141] libmachine: Using SSH client type: native
	I0916 19:01:27.328962  417385 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:01:27.328974  417385 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 19:01:27.438095  417385 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726513287.414195954
	
	I0916 19:01:27.438125  417385 fix.go:216] guest clock: 1726513287.414195954
	I0916 19:01:27.438134  417385 fix.go:229] Guest: 2024-09-16 19:01:27.414195954 +0000 UTC Remote: 2024-09-16 19:01:27.324643169 +0000 UTC m=+49.852788714 (delta=89.552785ms)
	I0916 19:01:27.438160  417385 fix.go:200] guest clock delta is within tolerance: 89.552785ms
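	(The two fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the 89.55ms skew. A small Go sketch of that comparison, using the exact timestamps from the log; the 2s tolerance is an assumption, since the real threshold is not printed here.)

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Guest clock: 1726513287.414195954 seconds since the epoch.
		guest := time.Unix(1726513287, 414195954)
		// Host-side "Remote" timestamp from the log.
		remote := time.Date(2024, 9, 16, 19, 1, 27, 324643169, time.UTC)

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
	}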
	I0916 19:01:27.438168  417385 start.go:83] releasing machines lock for "kubernetes-upgrade-698346", held for 25.183820596s
	I0916 19:01:27.438204  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:27.438529  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:01:27.441838  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.442238  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.442269  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.442537  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:27.443242  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:27.443455  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:01:27.443562  417385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 19:01:27.443628  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:27.443669  417385 ssh_runner.go:195] Run: cat /version.json
	I0916 19:01:27.443696  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:01:27.446529  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.446913  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.446955  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.446982  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.447141  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:27.447338  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:27.447385  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:27.447416  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:27.447498  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:27.447570  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:01:27.447653  417385 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:01:27.447750  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:01:27.447913  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:01:27.448047  417385 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:01:27.556890  417385 ssh_runner.go:195] Run: systemctl --version
	I0916 19:01:27.564768  417385 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 19:01:27.743619  417385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 19:01:27.750125  417385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 19:01:27.750205  417385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 19:01:27.769224  417385 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 19:01:27.769252  417385 start.go:495] detecting cgroup driver to use...
	I0916 19:01:27.769316  417385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 19:01:27.787555  417385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 19:01:27.802358  417385 docker.go:217] disabling cri-docker service (if available) ...
	I0916 19:01:27.802441  417385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 19:01:27.817181  417385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 19:01:27.834349  417385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 19:01:27.962525  417385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 19:01:28.134270  417385 docker.go:233] disabling docker service ...
	I0916 19:01:28.134336  417385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 19:01:28.149652  417385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 19:01:28.163397  417385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 19:01:28.280569  417385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 19:01:28.408521  417385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 19:01:28.425166  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:01:28.445651  417385 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 19:01:28.445719  417385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:01:28.456730  417385 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 19:01:28.456804  417385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:01:28.467917  417385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:01:28.478806  417385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:01:28.489428  417385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 19:01:28.500700  417385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 19:01:28.510517  417385 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 19:01:28.510581  417385 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 19:01:28.525165  417385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 19:01:28.535123  417385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:28.657508  417385 ssh_runner.go:195] Run: sudo systemctl restart crio
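	(The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager before restarting the service. A rough Go equivalent of the first two edits, shown only to make the intent of the sed expressions explicit; it is not how minikube performs them and would need root to write the file.)

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		s := string(data)
		// Pin the pause image, mirroring: sed 's|^.*pause_image = .*$|...|'
		s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.2"`)
		// Force the cgroupfs cgroup manager, mirroring the second sed edit.
		s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
		if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
			panic(err)
		}
	}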
	I0916 19:01:28.764796  417385 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 19:01:28.764904  417385 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 19:01:28.770753  417385 start.go:563] Will wait 60s for crictl version
	I0916 19:01:28.770822  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:28.774697  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 19:01:28.817314  417385 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 19:01:28.817437  417385 ssh_runner.go:195] Run: crio --version
	I0916 19:01:28.848090  417385 ssh_runner.go:195] Run: crio --version
	I0916 19:01:28.881713  417385 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0916 19:01:28.883378  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:01:28.889204  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:28.889806  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:01:17 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:01:28.889846  417385 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:01:28.890228  417385 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0916 19:01:28.894861  417385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:01:28.908443  417385 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 19:01:28.908560  417385 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 19:01:28.908620  417385 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:01:28.946706  417385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 19:01:28.946776  417385 ssh_runner.go:195] Run: which lz4
	I0916 19:01:28.951194  417385 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 19:01:28.955500  417385 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 19:01:28.955534  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 19:01:30.749376  417385 crio.go:462] duration metric: took 1.798219352s to copy over tarball
	I0916 19:01:30.749463  417385 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 19:01:33.458210  417385 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.708709884s)
	I0916 19:01:33.458248  417385 crio.go:469] duration metric: took 2.708840499s to extract the tarball
	I0916 19:01:33.458259  417385 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 19:01:33.502608  417385 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:01:33.551413  417385 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 19:01:33.551448  417385 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 19:01:33.551532  417385 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:01:33.551561  417385 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 19:01:33.551589  417385 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:33.551535  417385 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:33.551563  417385 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:33.551569  417385 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:33.551535  417385 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:33.551578  417385 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 19:01:33.553367  417385 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:33.553376  417385 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:33.553392  417385 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 19:01:33.553366  417385 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:33.553430  417385 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:01:33.553380  417385 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:33.553452  417385 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:33.553473  417385 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 19:01:33.769365  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:33.784002  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:33.785011  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 19:01:33.797872  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 19:01:33.800362  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:33.847684  417385 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 19:01:33.847755  417385 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:33.847806  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:33.850527  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:33.898188  417385 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 19:01:33.898244  417385 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:33.898299  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:33.916610  417385 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 19:01:33.916731  417385 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 19:01:33.916802  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:33.926597  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:33.931303  417385 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 19:01:33.931355  417385 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 19:01:33.931422  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:33.983854  417385 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 19:01:33.983933  417385 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 19:01:33.983978  417385 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:33.983978  417385 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:33.984007  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:33.984019  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:33.984019  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:33.983935  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:33.984008  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 19:01:34.005467  417385 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 19:01:34.005524  417385 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:34.005552  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 19:01:34.005572  417385 ssh_runner.go:195] Run: which crictl
	I0916 19:01:34.085870  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:34.085900  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:34.087341  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:34.087411  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 19:01:34.087432  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:34.087491  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 19:01:34.087531  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:34.246303  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:01:34.246379  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:34.267299  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:34.267654  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:34.267888  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 19:01:34.395213  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 19:01:34.395260  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:01:34.395261  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:01:34.395380  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 19:01:34.395436  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 19:01:34.395473  417385 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:01:34.395542  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 19:01:34.478057  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 19:01:34.499111  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 19:01:34.518453  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 19:01:34.518513  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 19:01:34.518591  417385 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 19:01:34.641216  417385 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:01:34.789349  417385 cache_images.go:92] duration metric: took 1.237877452s to LoadCachedImages
	W0916 19:01:34.789474  417385 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
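	(The "needs transfer" lines above come from comparing the image ID that the runtime reports against the hash minikube expects for each cached image; a hedged sketch of that check, with a hypothetical needsTransfer helper that shells out to the same `podman image inspect` command shown in the log. The pause:3.2 hash used in the example is the one printed above.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageID returns the image ID stored in the container runtime, or an error
	// if the image is not present (the common case on a fresh VM).
	func imageID(image string) (string, error) {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	// needsTransfer reports whether the cached image must be copied into the VM.
	func needsTransfer(image, wantHash string) bool {
		id, err := imageID(image)
		return err != nil || id != wantHash
	}

	func main() {
		fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
			"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
	}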
	I0916 19:01:34.789491  417385 kubeadm.go:934] updating node { 192.168.50.23 8443 v1.20.0 crio true true} ...
	I0916 19:01:34.789620  417385 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-698346 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
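	(The kubelet systemd drop-in above is rendered from the node's settings: Kubernetes version, hostname override, node IP, and the CRI-O socket. A minimal text/template sketch of that kind of rendering, with a shortened flag list and a template string that is purely illustrative, not minikube's real template.)

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.20.0",
			"NodeName":          "kubernetes-upgrade-698346",
			"NodeIP":            "192.168.50.23",
		})
		if err != nil {
			panic(err)
		}
	}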
	I0916 19:01:34.789716  417385 ssh_runner.go:195] Run: crio config
	I0916 19:01:34.848878  417385 cni.go:84] Creating CNI manager for ""
	I0916 19:01:34.848912  417385 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:01:34.848941  417385 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 19:01:34.848970  417385 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.23 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-698346 NodeName:kubernetes-upgrade-698346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 19:01:34.849161  417385 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-698346"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
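	(The generated kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, later copied to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch that reads such a file and prints the apiVersion/kind of each document, assuming the gopkg.in/yaml.v3 module; shown only to illustrate the multi-document structure, not anything minikube itself runs.)

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			// Only the identifying fields are decoded; the rest is ignored.
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}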
	I0916 19:01:34.849254  417385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 19:01:34.860782  417385 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 19:01:34.860878  417385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 19:01:34.872353  417385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0916 19:01:34.891054  417385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 19:01:34.913479  417385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0916 19:01:34.936639  417385 ssh_runner.go:195] Run: grep 192.168.50.23	control-plane.minikube.internal$ /etc/hosts
	I0916 19:01:34.941140  417385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.23	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:01:34.958495  417385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:01:35.091757  417385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:01:35.112249  417385 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346 for IP: 192.168.50.23
	I0916 19:01:35.112284  417385 certs.go:194] generating shared ca certs ...
	I0916 19:01:35.112308  417385 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.112536  417385 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 19:01:35.112678  417385 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 19:01:35.112718  417385 certs.go:256] generating profile certs ...
	I0916 19:01:35.112814  417385 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.key
	I0916 19:01:35.112834  417385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.crt with IP's: []
	I0916 19:01:35.329543  417385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.crt ...
	I0916 19:01:35.329576  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.crt: {Name:mk52cdd65f0d548d0d51f78e1e1e14005720f263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.329756  417385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.key ...
	I0916 19:01:35.329768  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.key: {Name:mkef418701a9a61606f62ad5e2ff37ec6af657a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.329860  417385 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key.edafc3c5
	I0916 19:01:35.329881  417385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt.edafc3c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.23]
	I0916 19:01:35.482685  417385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt.edafc3c5 ...
	I0916 19:01:35.482720  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt.edafc3c5: {Name:mk7580dc3863dbc4a5ca3f6caee65e6426bcdd79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.482917  417385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key.edafc3c5 ...
	I0916 19:01:35.482935  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key.edafc3c5: {Name:mkdd04d4abded939cd99e8acdda3c38de74f71fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.483019  417385 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt.edafc3c5 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt
	I0916 19:01:35.483128  417385 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key.edafc3c5 -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key
	I0916 19:01:35.483194  417385 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.key
	I0916 19:01:35.483212  417385 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.crt with IP's: []
	I0916 19:01:35.655713  417385 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.crt ...
	I0916 19:01:35.655749  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.crt: {Name:mk63d7ca06c96bb2dd085f13a9eadaaf83eb0f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.747636  417385 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.key ...
	I0916 19:01:35.747706  417385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.key: {Name:mk22e9b0d099d43cc77a479676ab5ba52c323577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:01:35.747969  417385 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 19:01:35.748013  417385 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 19:01:35.748025  417385 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 19:01:35.748050  417385 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 19:01:35.748083  417385 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 19:01:35.748106  417385 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 19:01:35.748150  417385 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:01:35.748883  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 19:01:35.779467  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 19:01:35.805512  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 19:01:35.831727  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 19:01:35.858673  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 19:01:35.884667  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 19:01:35.910578  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 19:01:35.959885  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 19:01:36.003235  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 19:01:36.034597  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 19:01:36.060964  417385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 19:01:36.089128  417385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 19:01:36.109326  417385 ssh_runner.go:195] Run: openssl version
	I0916 19:01:36.115737  417385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 19:01:36.128678  417385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 19:01:36.133855  417385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 19:01:36.133927  417385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 19:01:36.140140  417385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 19:01:36.151764  417385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 19:01:36.163990  417385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 19:01:36.169029  417385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 19:01:36.169109  417385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 19:01:36.175214  417385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 19:01:36.187453  417385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 19:01:36.199710  417385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:01:36.205009  417385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:01:36.205093  417385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:01:36.211592  417385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
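	# Hedged sketch of the CA-certificate installation pattern the runner executes above:
	# the PEM is copied into /usr/share/ca-certificates, then linked into /etc/ssl/certs under
	# its OpenSSL subject-hash name so TLS clients on the node can find it.
	# "example.pem" is a hypothetical file name; the hash depends on the certificate
	# (e.g. b5213941 for minikubeCA.pem in the log above).
	PEM=/usr/share/ca-certificates/example.pem    # hypothetical path
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"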
	I0916 19:01:36.224220  417385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 19:01:36.229163  417385 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 19:01:36.229234  417385 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:01:36.229322  417385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 19:01:36.229373  417385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 19:01:36.275666  417385 cri.go:89] found id: ""
	I0916 19:01:36.275756  417385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 19:01:36.287758  417385 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 19:01:36.299783  417385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 19:01:36.311388  417385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 19:01:36.311409  417385 kubeadm.go:157] found existing configuration files:
	
	I0916 19:01:36.311455  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 19:01:36.323277  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 19:01:36.323345  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 19:01:36.334084  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 19:01:36.344266  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 19:01:36.344339  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 19:01:36.357144  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 19:01:36.371347  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 19:01:36.371437  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 19:01:36.385817  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 19:01:36.399991  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 19:01:36.400072  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 19:01:36.411410  417385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 19:01:36.545957  417385 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 19:01:36.546046  417385 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 19:01:36.707460  417385 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 19:01:36.707626  417385 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 19:01:36.707789  417385 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 19:01:36.935518  417385 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 19:01:36.938465  417385 out.go:235]   - Generating certificates and keys ...
	I0916 19:01:36.938589  417385 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 19:01:36.938683  417385 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 19:01:36.997801  417385 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 19:01:37.134242  417385 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 19:01:37.272819  417385 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 19:01:37.328244  417385 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 19:01:37.568616  417385 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 19:01:37.568843  417385 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-698346 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	I0916 19:01:37.650854  417385 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 19:01:37.651075  417385 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-698346 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	I0916 19:01:37.845602  417385 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 19:01:38.037795  417385 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 19:01:38.173770  417385 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 19:01:38.174062  417385 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 19:01:38.792388  417385 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 19:01:39.041289  417385 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 19:01:39.225958  417385 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 19:01:39.590952  417385 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 19:01:39.609482  417385 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 19:01:39.611260  417385 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 19:01:39.611327  417385 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 19:01:39.755214  417385 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 19:01:39.757327  417385 out.go:235]   - Booting up control plane ...
	I0916 19:01:39.757465  417385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 19:01:39.764122  417385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 19:01:39.765272  417385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 19:01:39.766220  417385 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 19:01:39.780396  417385 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 19:02:19.776183  417385 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0916 19:02:19.776303  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:02:19.776512  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:02:24.777151  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:02:24.777410  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:02:34.776584  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:02:34.776875  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:02:54.775860  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:02:54.776142  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:03:34.777499  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:03:34.777748  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:03:34.777763  417385 kubeadm.go:310] 
	I0916 19:03:34.777827  417385 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0916 19:03:34.777880  417385 kubeadm.go:310] 		timed out waiting for the condition
	I0916 19:03:34.777891  417385 kubeadm.go:310] 
	I0916 19:03:34.777937  417385 kubeadm.go:310] 	This error is likely caused by:
	I0916 19:03:34.778017  417385 kubeadm.go:310] 		- The kubelet is not running
	I0916 19:03:34.778157  417385 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0916 19:03:34.778171  417385 kubeadm.go:310] 
	I0916 19:03:34.778310  417385 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0916 19:03:34.778356  417385 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0916 19:03:34.778421  417385 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0916 19:03:34.778442  417385 kubeadm.go:310] 
	I0916 19:03:34.778536  417385 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0916 19:03:34.778639  417385 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0916 19:03:34.778650  417385 kubeadm.go:310] 
	I0916 19:03:34.778754  417385 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0916 19:03:34.778865  417385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0916 19:03:34.778962  417385 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0916 19:03:34.779026  417385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0916 19:03:34.779033  417385 kubeadm.go:310] 
	I0916 19:03:34.779858  417385 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 19:03:34.780006  417385 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0916 19:03:34.780105  417385 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0916 19:03:34.780283  417385 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-698346 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-698346 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-698346 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-698346 localhost] and IPs [192.168.50.23 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
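	# Hedged sketch: consolidated form of the troubleshooting commands kubeadm itself
	# recommends in the failure text above, run on the node (for example via
	# `minikube ssh -p kubernetes-upgrade-698346`). CONTAINERID is a placeholder for an
	# ID returned by the `crictl ps -a` listing; sudo may be required depending on the user.
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID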
	
	I0916 19:03:34.780332  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0916 19:03:36.940651  417385 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.160277224s)
	I0916 19:03:36.940756  417385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 19:03:36.954676  417385 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 19:03:36.967983  417385 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 19:03:36.968008  417385 kubeadm.go:157] found existing configuration files:
	
	I0916 19:03:36.968054  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 19:03:36.977419  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 19:03:36.977489  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 19:03:36.987960  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 19:03:36.997455  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 19:03:36.997513  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 19:03:37.006814  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 19:03:37.015726  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 19:03:37.015803  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 19:03:37.024943  417385 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 19:03:37.033659  417385 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 19:03:37.033715  417385 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 19:03:37.042821  417385 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 19:03:37.108250  417385 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 19:03:37.108309  417385 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 19:03:37.253899  417385 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 19:03:37.254026  417385 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 19:03:37.254173  417385 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 19:03:37.440232  417385 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 19:03:37.442235  417385 out.go:235]   - Generating certificates and keys ...
	I0916 19:03:37.442340  417385 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 19:03:37.442450  417385 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 19:03:37.442578  417385 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 19:03:37.442674  417385 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 19:03:37.442786  417385 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 19:03:37.443015  417385 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 19:03:37.443548  417385 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 19:03:37.444095  417385 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 19:03:37.444531  417385 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 19:03:37.445109  417385 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 19:03:37.445205  417385 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 19:03:37.445288  417385 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 19:03:37.567931  417385 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 19:03:37.679197  417385 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 19:03:37.794151  417385 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 19:03:37.916771  417385 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 19:03:37.930444  417385 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 19:03:37.931538  417385 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 19:03:37.931623  417385 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 19:03:38.085192  417385 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 19:03:38.087013  417385 out.go:235]   - Booting up control plane ...
	I0916 19:03:38.087129  417385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 19:03:38.095218  417385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 19:03:38.096461  417385 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 19:03:38.097771  417385 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 19:03:38.104440  417385 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 19:04:18.107838  417385 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0916 19:04:18.107951  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:04:18.108219  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:04:23.108852  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:04:23.109106  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:04:33.109909  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:04:33.110166  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:04:53.109389  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:04:53.109666  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:05:33.109471  417385 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0916 19:05:33.109767  417385 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0916 19:05:33.109789  417385 kubeadm.go:310] 
	I0916 19:05:33.109851  417385 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0916 19:05:33.109907  417385 kubeadm.go:310] 		timed out waiting for the condition
	I0916 19:05:33.109916  417385 kubeadm.go:310] 
	I0916 19:05:33.109962  417385 kubeadm.go:310] 	This error is likely caused by:
	I0916 19:05:33.110032  417385 kubeadm.go:310] 		- The kubelet is not running
	I0916 19:05:33.110196  417385 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0916 19:05:33.110206  417385 kubeadm.go:310] 
	I0916 19:05:33.110342  417385 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0916 19:05:33.110403  417385 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0916 19:05:33.110461  417385 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0916 19:05:33.110471  417385 kubeadm.go:310] 
	I0916 19:05:33.110605  417385 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0916 19:05:33.110723  417385 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0916 19:05:33.110747  417385 kubeadm.go:310] 
	I0916 19:05:33.110877  417385 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0916 19:05:33.111002  417385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0916 19:05:33.111118  417385 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0916 19:05:33.111224  417385 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0916 19:05:33.111239  417385 kubeadm.go:310] 
	I0916 19:05:33.112017  417385 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 19:05:33.112141  417385 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0916 19:05:33.112259  417385 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0916 19:05:33.112351  417385 kubeadm.go:394] duration metric: took 3m56.883123764s to StartCluster
	I0916 19:05:33.112403  417385 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0916 19:05:33.112471  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 19:05:33.154236  417385 cri.go:89] found id: ""
	I0916 19:05:33.154267  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.154279  417385 logs.go:278] No container was found matching "kube-apiserver"
	I0916 19:05:33.154295  417385 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0916 19:05:33.154362  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 19:05:33.191262  417385 cri.go:89] found id: ""
	I0916 19:05:33.191298  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.191310  417385 logs.go:278] No container was found matching "etcd"
	I0916 19:05:33.191318  417385 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0916 19:05:33.191389  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 19:05:33.228763  417385 cri.go:89] found id: ""
	I0916 19:05:33.228795  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.228808  417385 logs.go:278] No container was found matching "coredns"
	I0916 19:05:33.228816  417385 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0916 19:05:33.228893  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 19:05:33.268146  417385 cri.go:89] found id: ""
	I0916 19:05:33.268186  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.268198  417385 logs.go:278] No container was found matching "kube-scheduler"
	I0916 19:05:33.268207  417385 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0916 19:05:33.268272  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 19:05:33.303942  417385 cri.go:89] found id: ""
	I0916 19:05:33.303975  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.303986  417385 logs.go:278] No container was found matching "kube-proxy"
	I0916 19:05:33.303995  417385 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 19:05:33.304064  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 19:05:33.344416  417385 cri.go:89] found id: ""
	I0916 19:05:33.344444  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.344454  417385 logs.go:278] No container was found matching "kube-controller-manager"
	I0916 19:05:33.344460  417385 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0916 19:05:33.344543  417385 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 19:05:33.384522  417385 cri.go:89] found id: ""
	I0916 19:05:33.384556  417385 logs.go:276] 0 containers: []
	W0916 19:05:33.384566  417385 logs.go:278] No container was found matching "kindnet"
	I0916 19:05:33.384578  417385 logs.go:123] Gathering logs for kubelet ...
	I0916 19:05:33.384593  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 19:05:33.441751  417385 logs.go:123] Gathering logs for dmesg ...
	I0916 19:05:33.441791  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 19:05:33.460208  417385 logs.go:123] Gathering logs for describe nodes ...
	I0916 19:05:33.460244  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 19:05:33.630084  417385 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 19:05:33.630111  417385 logs.go:123] Gathering logs for CRI-O ...
	I0916 19:05:33.630127  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0916 19:05:33.767325  417385 logs.go:123] Gathering logs for container status ...
	I0916 19:05:33.767381  417385 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
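	# Hedged sketch: the same diagnostics minikube gathers above can be collected manually
	# on the node after a failed start. Paths and binary versions match this run
	# (v1.20.0 binaries, CRI-O runtime); adjust them if your layout differs.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400
	sudo crictl ps -a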
	W0916 19:05:33.817682  417385 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0916 19:05:33.817770  417385 out.go:270] * 
	* 
	W0916 19:05:33.817887  417385 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 19:05:33.817909  417385 out.go:270] * 
	W0916 19:05:33.818896  417385 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 19:05:33.924431  417385 out.go:201] 
	W0916 19:05:34.084139  417385 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0916 19:05:34.084201  417385 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0916 19:05:34.084223  417385 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0916 19:05:34.208453  417385 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
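Exit status 109 here is minikube's K8S_KUBELET_NOT_RUNNING failure: kubeadm's wait-control-plane phase gave up because the kubelet never answered on localhost:10248. A plausible triage sequence, built only from the suggestions kubeadm and minikube print above (these are not commands the test ran, and the actual kubelet failure would still need to be confirmed from the journal):

	# inspect the kubelet inside the VM, as suggested by kubeadm above
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-698346 sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-698346 sudo journalctl -xeu kubelet
	# retry the same start with the cgroup-driver hint from the minikube suggestion
	out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
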
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-698346
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-698346: (2.638311221s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-698346 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-698346 status --format={{.Host}}: exit status 7 (75.646942ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.056725428s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-698346 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (98.418997ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-698346] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-698346
	    minikube start -p kubernetes-upgrade-698346 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6983462 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-698346 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
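The refusal above is the outcome this step asserts: the K8S_DOWNGRADE_UNSUPPORTED guard exits with status 106 rather than modifying the existing v1.31.1 cluster. For reference, option 1 from the printed suggestion is how one would actually run the older version outside the test; a minimal sketch, with the driver and runtime flags reused from the test's own invocation:

	minikube delete -p kubernetes-upgrade-698346
	minikube start -p kubernetes-upgrade-698346 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio
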
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-698346 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.660672716s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-16 19:07:30.044630687 +0000 UTC m=+6190.758101470
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-698346 -n kubernetes-upgrade-698346
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-698346 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-698346 logs -n 25: (1.859551795s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-591484 sudo                  | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo                  | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo                  | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo cat              | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo cat              | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo                  | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo                  | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo                  | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo find             | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-591484 sudo crio             | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-591484                       | cilium-591484             | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC | 16 Sep 24 19:04 UTC |
	| start   | -p force-systemd-flag-669400           | force-systemd-flag-669400 | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC | 16 Sep 24 19:05 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-886101            | force-systemd-env-886101  | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC | 16 Sep 24 19:04 UTC |
	| start   | -p cert-options-196343                 | cert-options-196343       | jenkins | v1.34.0 | 16 Sep 24 19:04 UTC | 16 Sep 24 19:06 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-698346           | kubernetes-upgrade-698346 | jenkins | v1.34.0 | 16 Sep 24 19:05 UTC | 16 Sep 24 19:05 UTC |
	| start   | -p kubernetes-upgrade-698346           | kubernetes-upgrade-698346 | jenkins | v1.34.0 | 16 Sep 24 19:05 UTC | 16 Sep 24 19:06 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-669400 ssh cat      | force-systemd-flag-669400 | jenkins | v1.34.0 | 16 Sep 24 19:05 UTC | 16 Sep 24 19:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-669400           | force-systemd-flag-669400 | jenkins | v1.34.0 | 16 Sep 24 19:05 UTC | 16 Sep 24 19:05 UTC |
	| start   | -p pause-671192 --memory=2048          | pause-671192              | jenkins | v1.34.0 | 16 Sep 24 19:05 UTC |                     |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-196343 ssh                | cert-options-196343       | jenkins | v1.34.0 | 16 Sep 24 19:06 UTC | 16 Sep 24 19:06 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-196343 -- sudo         | cert-options-196343       | jenkins | v1.34.0 | 16 Sep 24 19:06 UTC | 16 Sep 24 19:06 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-196343                 | cert-options-196343       | jenkins | v1.34.0 | 16 Sep 24 19:06 UTC | 16 Sep 24 19:06 UTC |
	| start   | -p old-k8s-version-923816              | old-k8s-version-923816    | jenkins | v1.34.0 | 16 Sep 24 19:06 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-698346           | kubernetes-upgrade-698346 | jenkins | v1.34.0 | 16 Sep 24 19:06 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-698346           | kubernetes-upgrade-698346 | jenkins | v1.34.0 | 16 Sep 24 19:06 UTC | 16 Sep 24 19:07 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 19:06:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 19:06:35.438215  424928 out.go:345] Setting OutFile to fd 1 ...
	I0916 19:06:35.438782  424928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:06:35.438830  424928 out.go:358] Setting ErrFile to fd 2...
	I0916 19:06:35.438848  424928 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 19:06:35.439382  424928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 19:06:35.440787  424928 out.go:352] Setting JSON to false
	I0916 19:06:35.442304  424928 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":10138,"bootTime":1726503457,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 19:06:35.442466  424928 start.go:139] virtualization: kvm guest
	I0916 19:06:35.444848  424928 out.go:177] * [kubernetes-upgrade-698346] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 19:06:35.447168  424928 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 19:06:35.447185  424928 notify.go:220] Checking for updates...
	I0916 19:06:35.450541  424928 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 19:06:35.452128  424928 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 19:06:35.453595  424928 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 19:06:35.454994  424928 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 19:06:35.457111  424928 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 19:06:35.459081  424928 config.go:182] Loaded profile config "kubernetes-upgrade-698346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 19:06:35.459579  424928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 19:06:35.459638  424928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 19:06:35.476853  424928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43299
	I0916 19:06:35.477460  424928 main.go:141] libmachine: () Calling .GetVersion
	I0916 19:06:35.478109  424928 main.go:141] libmachine: Using API Version  1
	I0916 19:06:35.478141  424928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 19:06:35.478602  424928 main.go:141] libmachine: () Calling .GetMachineName
	I0916 19:06:35.478898  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:06:35.479273  424928 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 19:06:35.479761  424928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 19:06:35.479815  424928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 19:06:35.499954  424928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I0916 19:06:35.500606  424928 main.go:141] libmachine: () Calling .GetVersion
	I0916 19:06:35.501247  424928 main.go:141] libmachine: Using API Version  1
	I0916 19:06:35.501269  424928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 19:06:35.501673  424928 main.go:141] libmachine: () Calling .GetMachineName
	I0916 19:06:35.501867  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:06:35.544325  424928 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 19:06:35.545854  424928 start.go:297] selected driver: kvm2
	I0916 19:06:35.545875  424928 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:06:35.546033  424928 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 19:06:35.547073  424928 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 19:06:35.547176  424928 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 19:06:35.564047  424928 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 19:06:35.564640  424928 cni.go:84] Creating CNI manager for ""
	I0916 19:06:35.564712  424928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:06:35.564772  424928 start.go:340] cluster config:
	{Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-698346 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:06:35.564958  424928 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 19:06:35.567186  424928 out.go:177] * Starting "kubernetes-upgrade-698346" primary control-plane node in "kubernetes-upgrade-698346" cluster
	I0916 19:06:36.246320  424613 start.go:364] duration metric: took 25.060267504s to acquireMachinesLock for "old-k8s-version-923816"
	I0916 19:06:36.246398  424613 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-923816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-923816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 19:06:36.246562  424613 start.go:125] createHost starting for "" (driver="kvm2")
	I0916 19:06:34.627196  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.627756  424211 main.go:141] libmachine: (pause-671192) Found IP for machine: 192.168.72.172
	I0916 19:06:34.627774  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has current primary IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.627823  424211 main.go:141] libmachine: (pause-671192) Reserving static IP address...
	I0916 19:06:34.628221  424211 main.go:141] libmachine: (pause-671192) DBG | unable to find host DHCP lease matching {name: "pause-671192", mac: "52:54:00:47:be:53", ip: "192.168.72.172"} in network mk-pause-671192
	I0916 19:06:34.708325  424211 main.go:141] libmachine: (pause-671192) DBG | Getting to WaitForSSH function...
	I0916 19:06:34.708348  424211 main.go:141] libmachine: (pause-671192) Reserved static IP address: 192.168.72.172
	I0916 19:06:34.708359  424211 main.go:141] libmachine: (pause-671192) Waiting for SSH to be available...
	I0916 19:06:34.711168  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.711700  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:minikube Clientid:01:52:54:00:47:be:53}
	I0916 19:06:34.711729  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.711839  424211 main.go:141] libmachine: (pause-671192) DBG | Using SSH client type: external
	I0916 19:06:34.711901  424211 main.go:141] libmachine: (pause-671192) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/pause-671192/id_rsa (-rw-------)
	I0916 19:06:34.711926  424211 main.go:141] libmachine: (pause-671192) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.172 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/pause-671192/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 19:06:34.711943  424211 main.go:141] libmachine: (pause-671192) DBG | About to run SSH command:
	I0916 19:06:34.711950  424211 main.go:141] libmachine: (pause-671192) DBG | exit 0
	I0916 19:06:34.841411  424211 main.go:141] libmachine: (pause-671192) DBG | SSH cmd err, output: <nil>: 
	I0916 19:06:34.841728  424211 main.go:141] libmachine: (pause-671192) KVM machine creation complete!
	I0916 19:06:34.842140  424211 main.go:141] libmachine: (pause-671192) Calling .GetConfigRaw
	I0916 19:06:34.842828  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:34.843077  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:34.843251  424211 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 19:06:34.843262  424211 main.go:141] libmachine: (pause-671192) Calling .GetState
	I0916 19:06:34.844375  424211 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 19:06:34.844385  424211 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 19:06:34.844391  424211 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 19:06:34.844397  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:34.846973  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.847342  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:34.847376  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.847531  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:34.847706  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:34.847869  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:34.848027  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:34.848203  424211 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:34.848471  424211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0916 19:06:34.848479  424211 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 19:06:34.961113  424211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:06:34.961128  424211 main.go:141] libmachine: Detecting the provisioner...
	I0916 19:06:34.961134  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:34.964514  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.964994  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:34.965016  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:34.965191  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:34.965433  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:34.965577  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:34.965732  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:34.965876  424211 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:34.966059  424211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0916 19:06:34.966064  424211 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 19:06:35.079021  424211 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 19:06:35.079114  424211 main.go:141] libmachine: found compatible host: buildroot
	I0916 19:06:35.079121  424211 main.go:141] libmachine: Provisioning with buildroot...
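The provisioner detection above boils down to running "cat /etc/os-release" on the new VM and matching the ID field against the provisioners libmachine knows how to handle. A minimal Go sketch of that matching step (illustrative only; detectProvisioner is an invented helper, not minikube's code):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner scans os-release output (as returned by `cat /etc/os-release`
// over SSH) for the ID= field and reports whether it names a provisioner we
// know how to handle. Hypothetical helper for illustration.
func detectProvisioner(osRelease string) (string, error) {
	known := map[string]bool{"buildroot": true, "ubuntu": true}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			if known[id] {
				return id, nil
			}
			return "", fmt.Errorf("no compatible provisioner for %q", id)
		}
	}
	return "", fmt.Errorf("ID field not found in os-release output")
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	id, err := detectProvisioner(out)
	fmt.Println(id, err) // buildroot <nil>
}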
	I0916 19:06:35.079127  424211 main.go:141] libmachine: (pause-671192) Calling .GetMachineName
	I0916 19:06:35.079403  424211 buildroot.go:166] provisioning hostname "pause-671192"
	I0916 19:06:35.079422  424211 main.go:141] libmachine: (pause-671192) Calling .GetMachineName
	I0916 19:06:35.079614  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:35.082599  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.082983  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:35.082999  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.083240  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:35.083478  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.083656  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.083824  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:35.084042  424211 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:35.084240  424211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0916 19:06:35.084249  424211 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-671192 && echo "pause-671192" | sudo tee /etc/hostname
	I0916 19:06:35.217547  424211 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-671192
	
	I0916 19:06:35.217567  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:35.220886  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.221393  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:35.221414  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.221628  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:35.221815  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.221988  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.222121  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:35.222276  424211 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:35.222501  424211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0916 19:06:35.222522  424211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-671192' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-671192/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-671192' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 19:06:35.357031  424211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
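The hostname step is just a small idempotent shell snippet executed over SSH with the machine's key. A hedged sketch of running such a command with golang.org/x/crypto/ssh (runOverSSH and the key path are placeholders, not minikube's ssh_runner):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens an SSH session with a private key and runs one shell
// command, returning its combined output. Illustrative helper only.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Same idempotent command the log shows for setting the hostname.
	cmd := `sudo hostname pause-671192 && echo "pause-671192" | sudo tee /etc/hostname`
	// "id_rsa" here is a placeholder for the machine's SSH key path.
	out, err := runOverSSH("192.168.72.172:22", "docker", "id_rsa", cmd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}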
	I0916 19:06:35.357053  424211 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 19:06:35.357096  424211 buildroot.go:174] setting up certificates
	I0916 19:06:35.357107  424211 provision.go:84] configureAuth start
	I0916 19:06:35.357119  424211 main.go:141] libmachine: (pause-671192) Calling .GetMachineName
	I0916 19:06:35.357431  424211 main.go:141] libmachine: (pause-671192) Calling .GetIP
	I0916 19:06:35.360678  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.361133  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:35.361156  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.361331  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:35.363950  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.364299  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:35.364321  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.364514  424211 provision.go:143] copyHostCerts
	I0916 19:06:35.364580  424211 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 19:06:35.364589  424211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 19:06:35.364652  424211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 19:06:35.364822  424211 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 19:06:35.364828  424211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 19:06:35.364858  424211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 19:06:35.364992  424211 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 19:06:35.364998  424211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 19:06:35.365026  424211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 19:06:35.365117  424211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.pause-671192 san=[127.0.0.1 192.168.72.172 localhost minikube pause-671192]
	I0916 19:06:35.580985  424211 provision.go:177] copyRemoteCerts
	I0916 19:06:35.581037  424211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 19:06:35.581064  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:35.584225  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.584670  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:35.584700  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.584914  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:35.585132  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.585241  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:35.585325  424211 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/pause-671192/id_rsa Username:docker}
	I0916 19:06:35.668273  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 19:06:35.699169  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 19:06:35.727764  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 19:06:35.754008  424211 provision.go:87] duration metric: took 396.885999ms to configureAuth
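configureAuth generates a server certificate signed by the local CA, with the SAN list shown in the log (loopback, the VM's IP, and a few hostnames), and then copies it to /etc/docker on the guest. A rough sketch of the certificate-issuing part with crypto/x509, assuming a PKCS#1 RSA CA key and using placeholder file names:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"math/big"
	"net"
	"os"
	"time"
)

// generateServerCert issues a server certificate signed by an existing CA,
// with DNS and IP SANs like those in the log. Paths and the PKCS#1 key format
// are assumptions for this sketch.
func generateServerCert(caCertPEM, caKeyPEM []byte, dnsNames []string, ips []net.IP) (certPEM, keyPEM []byte, err error) {
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		return nil, nil, errors.New("could not decode CA PEM data")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA (PKCS#1) CA key
	if err != nil {
		return nil, nil, err
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-671192"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	return certPEM, keyPEM, nil
}

func main() {
	caCert, _ := os.ReadFile("ca.pem")    // placeholder path
	caKey, _ := os.ReadFile("ca-key.pem") // placeholder path
	cert, key, err := generateServerCert(caCert, caKey,
		[]string{"localhost", "minikube", "pause-671192"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.172")})
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", cert, 0644)
	_ = os.WriteFile("server-key.pem", key, 0600)
}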
	I0916 19:06:35.754032  424211 buildroot.go:189] setting minikube options for container-runtime
	I0916 19:06:35.754249  424211 config.go:182] Loaded profile config "pause-671192": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 19:06:35.754350  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:35.757252  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.757650  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:35.757674  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:35.757925  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:35.758112  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.758339  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:35.758551  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:35.758759  424211 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:35.758923  424211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0916 19:06:35.758933  424211 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 19:06:35.997019  424211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 19:06:35.997041  424211 main.go:141] libmachine: Checking connection to Docker...
	I0916 19:06:35.997050  424211 main.go:141] libmachine: (pause-671192) Calling .GetURL
	I0916 19:06:35.998592  424211 main.go:141] libmachine: (pause-671192) DBG | Using libvirt version 6000000
	I0916 19:06:36.000751  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.001204  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.001235  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.001463  424211 main.go:141] libmachine: Docker is up and running!
	I0916 19:06:36.001473  424211 main.go:141] libmachine: Reticulating splines...
	I0916 19:06:36.001480  424211 client.go:171] duration metric: took 24.577487104s to LocalClient.Create
	I0916 19:06:36.001501  424211 start.go:167] duration metric: took 24.577540139s to libmachine.API.Create "pause-671192"
	I0916 19:06:36.001509  424211 start.go:293] postStartSetup for "pause-671192" (driver="kvm2")
	I0916 19:06:36.001519  424211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 19:06:36.001541  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:36.001889  424211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 19:06:36.001912  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:36.004513  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.004878  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.004931  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.005067  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:36.005274  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:36.005466  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:36.005605  424211 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/pause-671192/id_rsa Username:docker}
	I0916 19:06:36.087636  424211 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 19:06:36.092236  424211 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 19:06:36.092273  424211 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 19:06:36.092344  424211 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 19:06:36.092419  424211 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 19:06:36.092505  424211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 19:06:36.102330  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:06:36.126948  424211 start.go:296] duration metric: took 125.423363ms for postStartSetup
	I0916 19:06:36.126995  424211 main.go:141] libmachine: (pause-671192) Calling .GetConfigRaw
	I0916 19:06:36.127647  424211 main.go:141] libmachine: (pause-671192) Calling .GetIP
	I0916 19:06:36.130700  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.131154  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.131185  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.131613  424211 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/config.json ...
	I0916 19:06:36.131863  424211 start.go:128] duration metric: took 24.729609985s to createHost
	I0916 19:06:36.131881  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:36.134130  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.134431  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.134451  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.134573  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:36.134760  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:36.134914  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:36.135020  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:36.135152  424211 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:36.135352  424211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.72.172 22 <nil> <nil>}
	I0916 19:06:36.135357  424211 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 19:06:36.246120  424211 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726513596.221619355
	
	I0916 19:06:36.246136  424211 fix.go:216] guest clock: 1726513596.221619355
	I0916 19:06:36.246146  424211 fix.go:229] Guest: 2024-09-16 19:06:36.221619355 +0000 UTC Remote: 2024-09-16 19:06:36.131870493 +0000 UTC m=+48.453714817 (delta=89.748862ms)
	I0916 19:06:36.246205  424211 fix.go:200] guest clock delta is within tolerance: 89.748862ms
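The guest-clock check compares the VM's "date +%s.%N" reading against the host clock and accepts the machine if the drift is small. A tiny sketch of that comparison (the tolerance value here is an assumption):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK compares the guest's reported time against the host's wall
// clock and accepts the machine if the absolute drift is under a tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(89 * time.Millisecond) // roughly the delta seen in the log
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", d, ok)
}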
	I0916 19:06:36.246212  424211 start.go:83] releasing machines lock for "pause-671192", held for 24.844132644s
	I0916 19:06:36.246251  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:36.246539  424211 main.go:141] libmachine: (pause-671192) Calling .GetIP
	I0916 19:06:36.249942  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.250317  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.250343  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.250514  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:36.251089  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:36.251286  424211 main.go:141] libmachine: (pause-671192) Calling .DriverName
	I0916 19:06:36.251432  424211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 19:06:36.251504  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:36.251515  424211 ssh_runner.go:195] Run: cat /version.json
	I0916 19:06:36.251528  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHHostname
	I0916 19:06:36.254373  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.254393  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.254754  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.254775  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.254797  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:36.254806  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:36.254915  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:36.255056  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHPort
	I0916 19:06:36.255111  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:36.255199  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHKeyPath
	I0916 19:06:36.255301  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:36.255309  424211 main.go:141] libmachine: (pause-671192) Calling .GetSSHUsername
	I0916 19:06:36.255468  424211 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/pause-671192/id_rsa Username:docker}
	I0916 19:06:36.255468  424211 sshutil.go:53] new ssh client: &{IP:192.168.72.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/pause-671192/id_rsa Username:docker}
	I0916 19:06:36.334401  424211 ssh_runner.go:195] Run: systemctl --version
	I0916 19:06:36.361607  424211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 19:06:36.541860  424211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 19:06:36.548314  424211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 19:06:36.548400  424211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 19:06:36.566908  424211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 19:06:36.566927  424211 start.go:495] detecting cgroup driver to use...
	I0916 19:06:36.567006  424211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 19:06:36.588855  424211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 19:06:36.604458  424211 docker.go:217] disabling cri-docker service (if available) ...
	I0916 19:06:36.604510  424211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 19:06:36.621786  424211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 19:06:36.637965  424211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 19:06:36.767533  424211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 19:06:36.936993  424211 docker.go:233] disabling docker service ...
	I0916 19:06:36.937063  424211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 19:06:36.952700  424211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 19:06:36.967052  424211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 19:06:37.086796  424211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 19:06:37.204620  424211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 19:06:37.221744  424211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:06:37.247777  424211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 19:06:37.247849  424211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.260341  424211 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 19:06:37.260400  424211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.272254  424211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.284538  424211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.296635  424211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 19:06:37.309788  424211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.321931  424211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.342161  424211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:06:37.354899  424211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 19:06:37.366479  424211 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 19:06:37.366533  424211 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 19:06:37.384153  424211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
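When the bridge-nf-call-iptables sysctl cannot be read, the setup treats it as "module not loaded yet", loads br_netfilter, and enables IPv4 forwarding. A sketch of that check-then-fallback logic; the commands run locally here for brevity, whereas minikube runs them on the guest over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: if the
// bridge-nf-call-iptables sysctl can't be read, load br_netfilter and
// continue, then make sure IPv4 forwarding is on for pod traffic.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl check failed, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		panic(err)
	}
}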
	I0916 19:06:37.396171  424211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:06:37.515502  424211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 19:06:37.623563  424211 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 19:06:37.623629  424211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 19:06:37.628447  424211 start.go:563] Will wait 60s for crictl version
	I0916 19:06:37.628504  424211 ssh_runner.go:195] Run: which crictl
	I0916 19:06:37.633274  424211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 19:06:37.674935  424211 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 19:06:37.675052  424211 ssh_runner.go:195] Run: crio --version
	I0916 19:06:37.704988  424211 ssh_runner.go:195] Run: crio --version
	I0916 19:06:37.738547  424211 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 19:06:35.568703  424928 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 19:06:35.568741  424928 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 19:06:35.568749  424928 cache.go:56] Caching tarball of preloaded images
	I0916 19:06:35.568841  424928 preload.go:172] Found /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0916 19:06:35.568848  424928 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0916 19:06:35.569006  424928 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/config.json ...
	I0916 19:06:35.569275  424928 start.go:360] acquireMachinesLock for kubernetes-upgrade-698346: {Name:mkb7b0b7fc061f26553e0890f28c9e138fe61a47 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0916 19:06:36.249192  424613 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0916 19:06:36.249429  424613 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 19:06:36.249496  424613 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 19:06:36.267261  424613 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43383
	I0916 19:06:36.267790  424613 main.go:141] libmachine: () Calling .GetVersion
	I0916 19:06:36.268412  424613 main.go:141] libmachine: Using API Version  1
	I0916 19:06:36.268435  424613 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 19:06:36.268852  424613 main.go:141] libmachine: () Calling .GetMachineName
	I0916 19:06:36.269122  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetMachineName
	I0916 19:06:36.269282  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:06:36.269432  424613 start.go:159] libmachine.API.Create for "old-k8s-version-923816" (driver="kvm2")
	I0916 19:06:36.269467  424613 client.go:168] LocalClient.Create starting
	I0916 19:06:36.269506  424613 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem
	I0916 19:06:36.269546  424613 main.go:141] libmachine: Decoding PEM data...
	I0916 19:06:36.269575  424613 main.go:141] libmachine: Parsing certificate...
	I0916 19:06:36.269653  424613 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem
	I0916 19:06:36.269685  424613 main.go:141] libmachine: Decoding PEM data...
	I0916 19:06:36.269700  424613 main.go:141] libmachine: Parsing certificate...
	I0916 19:06:36.269727  424613 main.go:141] libmachine: Running pre-create checks...
	I0916 19:06:36.269739  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .PreCreateCheck
	I0916 19:06:36.270174  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetConfigRaw
	I0916 19:06:36.270645  424613 main.go:141] libmachine: Creating machine...
	I0916 19:06:36.270661  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .Create
	I0916 19:06:36.270841  424613 main.go:141] libmachine: (old-k8s-version-923816) Creating KVM machine...
	I0916 19:06:36.272308  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found existing default KVM network
	I0916 19:06:36.274107  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:36.273936  424960 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001157f0}
	I0916 19:06:36.274130  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | created network xml: 
	I0916 19:06:36.274144  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | <network>
	I0916 19:06:36.274152  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |   <name>mk-old-k8s-version-923816</name>
	I0916 19:06:36.274165  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |   <dns enable='no'/>
	I0916 19:06:36.274175  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |   
	I0916 19:06:36.274185  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0916 19:06:36.274201  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |     <dhcp>
	I0916 19:06:36.274211  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0916 19:06:36.274225  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |     </dhcp>
	I0916 19:06:36.274234  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |   </ip>
	I0916 19:06:36.274239  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG |   
	I0916 19:06:36.274254  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | </network>
	I0916 19:06:36.274263  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | 
	I0916 19:06:36.279957  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | trying to create private KVM network mk-old-k8s-version-923816 192.168.39.0/24...
	I0916 19:06:36.359287  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | private KVM network mk-old-k8s-version-923816 192.168.39.0/24 created
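The network XML printed above is generated from a template with the network name, gateway, and DHCP range filled in for the subnet that was picked. An illustrative text/template version (field names are our own; the rendered XML would then be handed to libvirt):

package main

import (
	"os"
	"text/template"
)

// networkTmpl reproduces the shape of the libvirt network XML shown in the
// log; the struct and field names are inventions for this sketch.
const networkTmpl = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
    <dhcp>
      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
    </dhcp>
  </ip>
</network>
`

type netParams struct {
	Name, Gateway, Netmask, ClientMin, ClientMax string
}

func main() {
	t := template.Must(template.New("net").Parse(networkTmpl))
	// Values taken from the subnet the log picked (192.168.39.0/24).
	p := netParams{
		Name:      "mk-old-k8s-version-923816",
		Gateway:   "192.168.39.1",
		Netmask:   "255.255.255.0",
		ClientMin: "192.168.39.2",
		ClientMax: "192.168.39.253",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}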
	I0916 19:06:36.359352  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:36.359230  424960 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 19:06:36.359367  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting up store path in /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816 ...
	I0916 19:06:36.359386  424613 main.go:141] libmachine: (old-k8s-version-923816) Building disk image from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 19:06:36.359406  424613 main.go:141] libmachine: (old-k8s-version-923816) Downloading /home/jenkins/minikube-integration/19649-371203/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso...
	I0916 19:06:36.625385  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:36.625233  424960 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa...
	I0916 19:06:36.722479  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:36.722341  424960 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/old-k8s-version-923816.rawdisk...
	I0916 19:06:36.722514  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Writing magic tar header
	I0916 19:06:36.722584  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Writing SSH key tar header
	I0916 19:06:36.722610  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:36.722471  424960 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816 ...
	I0916 19:06:36.722646  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816 (perms=drwx------)
	I0916 19:06:36.722657  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816
	I0916 19:06:36.722685  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube/machines
	I0916 19:06:36.722695  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 19:06:36.722711  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19649-371203
	I0916 19:06:36.722728  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0916 19:06:36.722740  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube/machines (perms=drwxr-xr-x)
	I0916 19:06:36.722757  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203/.minikube (perms=drwxr-xr-x)
	I0916 19:06:36.722771  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting executable bit set on /home/jenkins/minikube-integration/19649-371203 (perms=drwxrwxr-x)
	I0916 19:06:36.722786  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0916 19:06:36.722804  424613 main.go:141] libmachine: (old-k8s-version-923816) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0916 19:06:36.722815  424613 main.go:141] libmachine: (old-k8s-version-923816) Creating domain...
	I0916 19:06:36.722836  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home/jenkins
	I0916 19:06:36.722849  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Checking permissions on dir: /home
	I0916 19:06:36.722856  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Skipping /home - not owner
	I0916 19:06:36.723984  424613 main.go:141] libmachine: (old-k8s-version-923816) define libvirt domain using xml: 
	I0916 19:06:36.724024  424613 main.go:141] libmachine: (old-k8s-version-923816) <domain type='kvm'>
	I0916 19:06:36.724036  424613 main.go:141] libmachine: (old-k8s-version-923816)   <name>old-k8s-version-923816</name>
	I0916 19:06:36.724048  424613 main.go:141] libmachine: (old-k8s-version-923816)   <memory unit='MiB'>2200</memory>
	I0916 19:06:36.724056  424613 main.go:141] libmachine: (old-k8s-version-923816)   <vcpu>2</vcpu>
	I0916 19:06:36.724064  424613 main.go:141] libmachine: (old-k8s-version-923816)   <features>
	I0916 19:06:36.724071  424613 main.go:141] libmachine: (old-k8s-version-923816)     <acpi/>
	I0916 19:06:36.724075  424613 main.go:141] libmachine: (old-k8s-version-923816)     <apic/>
	I0916 19:06:36.724083  424613 main.go:141] libmachine: (old-k8s-version-923816)     <pae/>
	I0916 19:06:36.724091  424613 main.go:141] libmachine: (old-k8s-version-923816)     
	I0916 19:06:36.724111  424613 main.go:141] libmachine: (old-k8s-version-923816)   </features>
	I0916 19:06:36.724133  424613 main.go:141] libmachine: (old-k8s-version-923816)   <cpu mode='host-passthrough'>
	I0916 19:06:36.724141  424613 main.go:141] libmachine: (old-k8s-version-923816)   
	I0916 19:06:36.724151  424613 main.go:141] libmachine: (old-k8s-version-923816)   </cpu>
	I0916 19:06:36.724158  424613 main.go:141] libmachine: (old-k8s-version-923816)   <os>
	I0916 19:06:36.724163  424613 main.go:141] libmachine: (old-k8s-version-923816)     <type>hvm</type>
	I0916 19:06:36.724168  424613 main.go:141] libmachine: (old-k8s-version-923816)     <boot dev='cdrom'/>
	I0916 19:06:36.724175  424613 main.go:141] libmachine: (old-k8s-version-923816)     <boot dev='hd'/>
	I0916 19:06:36.724180  424613 main.go:141] libmachine: (old-k8s-version-923816)     <bootmenu enable='no'/>
	I0916 19:06:36.724185  424613 main.go:141] libmachine: (old-k8s-version-923816)   </os>
	I0916 19:06:36.724190  424613 main.go:141] libmachine: (old-k8s-version-923816)   <devices>
	I0916 19:06:36.724196  424613 main.go:141] libmachine: (old-k8s-version-923816)     <disk type='file' device='cdrom'>
	I0916 19:06:36.724204  424613 main.go:141] libmachine: (old-k8s-version-923816)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/boot2docker.iso'/>
	I0916 19:06:36.724215  424613 main.go:141] libmachine: (old-k8s-version-923816)       <target dev='hdc' bus='scsi'/>
	I0916 19:06:36.724220  424613 main.go:141] libmachine: (old-k8s-version-923816)       <readonly/>
	I0916 19:06:36.724225  424613 main.go:141] libmachine: (old-k8s-version-923816)     </disk>
	I0916 19:06:36.724231  424613 main.go:141] libmachine: (old-k8s-version-923816)     <disk type='file' device='disk'>
	I0916 19:06:36.724238  424613 main.go:141] libmachine: (old-k8s-version-923816)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0916 19:06:36.724248  424613 main.go:141] libmachine: (old-k8s-version-923816)       <source file='/home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/old-k8s-version-923816.rawdisk'/>
	I0916 19:06:36.724254  424613 main.go:141] libmachine: (old-k8s-version-923816)       <target dev='hda' bus='virtio'/>
	I0916 19:06:36.724260  424613 main.go:141] libmachine: (old-k8s-version-923816)     </disk>
	I0916 19:06:36.724270  424613 main.go:141] libmachine: (old-k8s-version-923816)     <interface type='network'>
	I0916 19:06:36.724279  424613 main.go:141] libmachine: (old-k8s-version-923816)       <source network='mk-old-k8s-version-923816'/>
	I0916 19:06:36.724283  424613 main.go:141] libmachine: (old-k8s-version-923816)       <model type='virtio'/>
	I0916 19:06:36.724312  424613 main.go:141] libmachine: (old-k8s-version-923816)     </interface>
	I0916 19:06:36.724335  424613 main.go:141] libmachine: (old-k8s-version-923816)     <interface type='network'>
	I0916 19:06:36.724346  424613 main.go:141] libmachine: (old-k8s-version-923816)       <source network='default'/>
	I0916 19:06:36.724356  424613 main.go:141] libmachine: (old-k8s-version-923816)       <model type='virtio'/>
	I0916 19:06:36.724365  424613 main.go:141] libmachine: (old-k8s-version-923816)     </interface>
	I0916 19:06:36.724374  424613 main.go:141] libmachine: (old-k8s-version-923816)     <serial type='pty'>
	I0916 19:06:36.724383  424613 main.go:141] libmachine: (old-k8s-version-923816)       <target port='0'/>
	I0916 19:06:36.724390  424613 main.go:141] libmachine: (old-k8s-version-923816)     </serial>
	I0916 19:06:36.724405  424613 main.go:141] libmachine: (old-k8s-version-923816)     <console type='pty'>
	I0916 19:06:36.724414  424613 main.go:141] libmachine: (old-k8s-version-923816)       <target type='serial' port='0'/>
	I0916 19:06:36.724430  424613 main.go:141] libmachine: (old-k8s-version-923816)     </console>
	I0916 19:06:36.724441  424613 main.go:141] libmachine: (old-k8s-version-923816)     <rng model='virtio'>
	I0916 19:06:36.724454  424613 main.go:141] libmachine: (old-k8s-version-923816)       <backend model='random'>/dev/random</backend>
	I0916 19:06:36.724463  424613 main.go:141] libmachine: (old-k8s-version-923816)     </rng>
	I0916 19:06:36.724472  424613 main.go:141] libmachine: (old-k8s-version-923816)     
	I0916 19:06:36.724481  424613 main.go:141] libmachine: (old-k8s-version-923816)     
	I0916 19:06:36.724492  424613 main.go:141] libmachine: (old-k8s-version-923816)   </devices>
	I0916 19:06:36.724506  424613 main.go:141] libmachine: (old-k8s-version-923816) </domain>
	I0916 19:06:36.724521  424613 main.go:141] libmachine: (old-k8s-version-923816) 
	I0916 19:06:36.729575  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:8d:e9:36 in network default
	I0916 19:06:36.730123  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:36.730142  424613 main.go:141] libmachine: (old-k8s-version-923816) Ensuring networks are active...
	I0916 19:06:36.730812  424613 main.go:141] libmachine: (old-k8s-version-923816) Ensuring network default is active
	I0916 19:06:36.731125  424613 main.go:141] libmachine: (old-k8s-version-923816) Ensuring network mk-old-k8s-version-923816 is active
	I0916 19:06:36.731616  424613 main.go:141] libmachine: (old-k8s-version-923816) Getting domain xml...
	I0916 19:06:36.732407  424613 main.go:141] libmachine: (old-k8s-version-923816) Creating domain...
	I0916 19:06:38.091484  424613 main.go:141] libmachine: (old-k8s-version-923816) Waiting to get IP...
	I0916 19:06:38.092546  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:38.093087  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:38.093117  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:38.093052  424960 retry.go:31] will retry after 217.457237ms: waiting for machine to come up
	I0916 19:06:38.313031  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:38.313581  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:38.313611  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:38.313531  424960 retry.go:31] will retry after 296.742855ms: waiting for machine to come up
	I0916 19:06:38.612212  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:38.612838  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:38.612867  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:38.612746  424960 retry.go:31] will retry after 447.402986ms: waiting for machine to come up
	I0916 19:06:39.061497  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:39.062052  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:39.062077  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:39.062010  424960 retry.go:31] will retry after 438.76545ms: waiting for machine to come up
	I0916 19:06:39.502972  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:39.503565  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:39.503606  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:39.503519  424960 retry.go:31] will retry after 519.886591ms: waiting for machine to come up
	I0916 19:06:40.025434  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:40.026022  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:40.026071  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:40.025965  424960 retry.go:31] will retry after 623.498263ms: waiting for machine to come up
	I0916 19:06:40.651930  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:40.652600  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:40.652646  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:40.652547  424960 retry.go:31] will retry after 925.855365ms: waiting for machine to come up
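The "Waiting to get IP" loop polls the domain's DHCP lease and backs off with growing, slightly jittered delays, much like the retry intervals in the log. A self-contained sketch of that retry pattern (waitForIP and its timings are illustrative, not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls a lookup function with a growing, jittered delay until it
// returns an address or the deadline passes. The lookup function stands in
// for reading the domain's DHCP lease.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 2*time.Second {
			delay += delay / 2 // grow roughly 1.5x per attempt, like the log's intervals
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.2", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}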
	I0916 19:06:37.740390  424211 main.go:141] libmachine: (pause-671192) Calling .GetIP
	I0916 19:06:37.743911  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:37.744382  424211 main.go:141] libmachine: (pause-671192) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:be:53", ip: ""} in network mk-pause-671192: {Iface:virbr1 ExpiryTime:2024-09-16 20:06:26 +0000 UTC Type:0 Mac:52:54:00:47:be:53 Iaid: IPaddr:192.168.72.172 Prefix:24 Hostname:pause-671192 Clientid:01:52:54:00:47:be:53}
	I0916 19:06:37.744404  424211 main.go:141] libmachine: (pause-671192) DBG | domain pause-671192 has defined IP address 192.168.72.172 and MAC address 52:54:00:47:be:53 in network mk-pause-671192
	I0916 19:06:37.744710  424211 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0916 19:06:37.749391  424211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:06:37.766991  424211 kubeadm.go:883] updating cluster {Name:pause-671192 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-671192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 19:06:37.767124  424211 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 19:06:37.767190  424211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:06:37.811746  424211 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 19:06:37.811818  424211 ssh_runner.go:195] Run: which lz4
	I0916 19:06:37.816698  424211 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 19:06:37.823727  424211 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 19:06:37.823757  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0916 19:06:39.266118  424211 crio.go:462] duration metric: took 1.449466535s to copy over tarball
	I0916 19:06:39.266202  424211 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 19:06:41.508733  424211 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.242495514s)
	I0916 19:06:41.508763  424211 crio.go:469] duration metric: took 2.242619526s to extract the tarball
	I0916 19:06:41.508795  424211 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 19:06:41.546142  424211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:06:41.602513  424211 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 19:06:41.602530  424211 cache_images.go:84] Images are preloaded, skipping loading
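When the guest has no preloaded images, the tarball is scp'd over and unpacked into /var with lz4, preserving xattrs so image layers keep their file capabilities. A sketch of the extraction step with os/exec; it runs the command locally for brevity, while minikube runs the same tar invocation on the guest:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks a preloaded-images tarball into /var using lz4
// decompression, keeping security.capability xattrs, as in the log.
func extractPreload(tarball string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
}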
	I0916 19:06:41.602540  424211 kubeadm.go:934] updating node { 192.168.72.172 8443 v1.31.1 crio true true} ...
	I0916 19:06:41.602660  424211 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-671192 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.172
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-671192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 19:06:41.602722  424211 ssh_runner.go:195] Run: crio config
	I0916 19:06:41.672520  424211 cni.go:84] Creating CNI manager for ""
	I0916 19:06:41.672533  424211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:06:41.672542  424211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 19:06:41.672562  424211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.172 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-671192 NodeName:pause-671192 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 19:06:41.672687  424211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-671192"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 19:06:41.672747  424211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 19:06:41.682773  424211 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 19:06:41.682842  424211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 19:06:41.692558  424211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0916 19:06:41.710774  424211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 19:06:41.729464  424211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
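The kubeadm config generated above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2156 bytes) and is later fed to kubeadm init. A hedged sketch of how a file like this can be checked by hand, using standard kubeadm flags rather than anything specific to this test run:

    # Sketch: dry-run the generated config without mutating the node.
    # --config and --dry-run are standard kubeadm flags; the binary path and
    # config path are the ones that appear elsewhere in this log.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run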
	I0916 19:06:41.746849  424211 ssh_runner.go:195] Run: grep 192.168.72.172	control-plane.minikube.internal$ /etc/hosts
	I0916 19:06:41.751158  424211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.172	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:06:41.763974  424211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:06:41.898385  424211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:06:41.917638  424211 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192 for IP: 192.168.72.172
	I0916 19:06:41.917653  424211 certs.go:194] generating shared ca certs ...
	I0916 19:06:41.917673  424211 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:41.917867  424211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 19:06:41.917903  424211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 19:06:41.917909  424211 certs.go:256] generating profile certs ...
	I0916 19:06:41.917962  424211 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/client.key
	I0916 19:06:41.917991  424211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/client.crt with IP's: []
	I0916 19:06:41.980139  424211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/client.crt ...
	I0916 19:06:41.980158  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/client.crt: {Name:mk5fdb6cf36e317f937c8ddb5b723affc98eed3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:41.980378  424211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/client.key ...
	I0916 19:06:41.980386  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/client.key: {Name:mkadb971b44466afa0f75071486bd8d4f1b1a14d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:41.980506  424211 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.key.96ca525a
	I0916 19:06:41.980520  424211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.crt.96ca525a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.172]
	I0916 19:06:42.081477  424211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.crt.96ca525a ...
	I0916 19:06:42.081496  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.crt.96ca525a: {Name:mk268020e95a6f2b8fc3efffdf68db460f4c46fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:42.081695  424211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.key.96ca525a ...
	I0916 19:06:42.081706  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.key.96ca525a: {Name:mk78aa17a5fac207ffd44788eb356124539b46b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:42.081810  424211 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.crt.96ca525a -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.crt
	I0916 19:06:42.081887  424211 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.key.96ca525a -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.key
	I0916 19:06:42.081932  424211 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.key
	I0916 19:06:42.081943  424211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.crt with IP's: []
	I0916 19:06:42.164722  424211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.crt ...
	I0916 19:06:42.164739  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.crt: {Name:mk11f12f4ad4c4fc6527888f2edca790f21f8df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:42.164938  424211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.key ...
	I0916 19:06:42.164947  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.key: {Name:mk16131d7110acd61fd5a7973c6f74d4e567d868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:42.165147  424211 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 19:06:42.165183  424211 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 19:06:42.165189  424211 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 19:06:42.165209  424211 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 19:06:42.165227  424211 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 19:06:42.165245  424211 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 19:06:42.165279  424211 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:06:42.165847  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 19:06:42.196547  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 19:06:42.225714  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 19:06:42.255173  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 19:06:42.285359  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 19:06:42.313072  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 19:06:42.344131  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 19:06:42.372882  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/pause-671192/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 19:06:42.400047  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 19:06:42.430181  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 19:06:42.461199  424211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 19:06:42.488505  424211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 19:06:42.507073  424211 ssh_runner.go:195] Run: openssl version
	I0916 19:06:42.514573  424211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 19:06:42.526728  424211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 19:06:42.532084  424211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 19:06:42.532146  424211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 19:06:42.538756  424211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 19:06:42.554275  424211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 19:06:42.568900  424211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:06:42.573946  424211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:06:42.574013  424211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:06:42.580822  424211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 19:06:42.592484  424211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 19:06:42.604189  424211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 19:06:42.609365  424211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 19:06:42.609429  424211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 19:06:42.615824  424211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
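The ls / openssl x509 -hash / ln -fs sequence above is how each CA certificate gets wired into the OpenSSL hash directory: the subject hash becomes the <hash>.0 symlink name under /etc/ssl/certs. A minimal sketch of the same idea for one certificate (the file name is taken from this log; the variable names are only for illustration):

    # Sketch: link a CA cert into /etc/ssl/certs under its subject hash so
    # OpenSSL-based tools can find it during verification.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"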
	I0916 19:06:42.627602  424211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 19:06:42.632211  424211 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 19:06:42.632260  424211 kubeadm.go:392] StartCluster: {Name:pause-671192 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-671192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:06:42.632322  424211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 19:06:42.632373  424211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 19:06:42.679437  424211 cri.go:89] found id: ""
	I0916 19:06:42.679513  424211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 19:06:42.689939  424211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 19:06:42.708716  424211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 19:06:42.728586  424211 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 19:06:42.728600  424211 kubeadm.go:157] found existing configuration files:
	
	I0916 19:06:42.728663  424211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 19:06:42.742578  424211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 19:06:42.742666  424211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 19:06:42.756528  424211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 19:06:42.769577  424211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 19:06:42.769641  424211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 19:06:42.779900  424211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 19:06:42.795726  424211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 19:06:42.795809  424211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 19:06:42.806798  424211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 19:06:42.816668  424211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 19:06:42.816767  424211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 19:06:42.831111  424211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 19:06:42.963960  424211 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 19:06:42.964114  424211 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 19:06:43.071042  424211 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 19:06:43.071176  424211 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 19:06:43.071289  424211 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 19:06:43.084290  424211 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 19:06:43.369887  424211 out.go:235]   - Generating certificates and keys ...
	I0916 19:06:43.370054  424211 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 19:06:43.370116  424211 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 19:06:43.370242  424211 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 19:06:43.481053  424211 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 19:06:43.695702  424211 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 19:06:44.039271  424211 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 19:06:44.249365  424211 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 19:06:44.249608  424211 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost pause-671192] and IPs [192.168.72.172 127.0.0.1 ::1]
	I0916 19:06:44.364519  424211 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 19:06:44.364744  424211 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost pause-671192] and IPs [192.168.72.172 127.0.0.1 ::1]
	I0916 19:06:44.592974  424211 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 19:06:44.801323  424211 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 19:06:45.196997  424211 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 19:06:45.197103  424211 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 19:06:45.316975  424211 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 19:06:45.490996  424211 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 19:06:45.584085  424211 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 19:06:45.669530  424211 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 19:06:45.841139  424211 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 19:06:45.841688  424211 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 19:06:45.845162  424211 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 19:06:41.579859  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:41.580404  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:41.580477  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:41.580356  424960 retry.go:31] will retry after 945.520391ms: waiting for machine to come up
	I0916 19:06:42.527457  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:42.528080  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:42.528109  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:42.528028  424960 retry.go:31] will retry after 1.58521292s: waiting for machine to come up
	I0916 19:06:44.114555  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:44.115069  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:44.115096  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:44.115010  424960 retry.go:31] will retry after 2.126944554s: waiting for machine to come up
	I0916 19:06:45.847311  424211 out.go:235]   - Booting up control plane ...
	I0916 19:06:45.847417  424211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 19:06:45.848585  424211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 19:06:45.849605  424211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 19:06:45.867316  424211 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 19:06:45.876896  424211 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 19:06:45.876966  424211 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 19:06:46.078994  424211 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 19:06:46.079163  424211 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 19:06:46.580203  424211 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.716753ms
	I0916 19:06:46.580331  424211 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
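The kubelet-check and api-check phases above poll health endpoints until they report ok. A hedged sketch of checking the same endpoints by hand on the node (10248 is the kubelet healthz port named in the log; 8443 is this cluster's API server port, which also serves /healthz, /livez and /readyz):

    # Sketch: manual health probes mirroring what kubeadm waits for during init.
    curl -s http://127.0.0.1:10248/healthz; echo
    curl -sk https://192.168.72.172:8443/readyz; echo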
	I0916 19:06:46.243747  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:46.244280  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:46.244313  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:46.244219  424960 retry.go:31] will retry after 2.371705355s: waiting for machine to come up
	I0916 19:06:48.618224  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:48.618727  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:48.618761  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:48.618682  424960 retry.go:31] will retry after 2.392129657s: waiting for machine to come up
	I0916 19:06:51.013856  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:51.014337  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:51.014392  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:51.014296  424960 retry.go:31] will retry after 2.928450755s: waiting for machine to come up
	I0916 19:06:52.081169  424211 kubeadm.go:310] [api-check] The API server is healthy after 5.503884895s
	I0916 19:06:52.097361  424211 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 19:06:52.133954  424211 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 19:06:52.190468  424211 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 19:06:52.190728  424211 kubeadm.go:310] [mark-control-plane] Marking the node pause-671192 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 19:06:52.209308  424211 kubeadm.go:310] [bootstrap-token] Using token: ci1pqh.qbqeprrvfa20e1zk
	I0916 19:06:52.211239  424211 out.go:235]   - Configuring RBAC rules ...
	I0916 19:06:52.211503  424211 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 19:06:52.223322  424211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 19:06:52.245323  424211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 19:06:52.255762  424211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 19:06:52.266513  424211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 19:06:52.279065  424211 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 19:06:52.492467  424211 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 19:06:52.939204  424211 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 19:06:53.489230  424211 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 19:06:53.490362  424211 kubeadm.go:310] 
	I0916 19:06:53.490461  424211 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 19:06:53.490467  424211 kubeadm.go:310] 
	I0916 19:06:53.490595  424211 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 19:06:53.490600  424211 kubeadm.go:310] 
	I0916 19:06:53.490631  424211 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 19:06:53.490736  424211 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 19:06:53.490818  424211 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 19:06:53.490823  424211 kubeadm.go:310] 
	I0916 19:06:53.490897  424211 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 19:06:53.490903  424211 kubeadm.go:310] 
	I0916 19:06:53.490969  424211 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 19:06:53.490974  424211 kubeadm.go:310] 
	I0916 19:06:53.491038  424211 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 19:06:53.491135  424211 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 19:06:53.491234  424211 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 19:06:53.491239  424211 kubeadm.go:310] 
	I0916 19:06:53.491376  424211 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 19:06:53.491492  424211 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 19:06:53.491498  424211 kubeadm.go:310] 
	I0916 19:06:53.491615  424211 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ci1pqh.qbqeprrvfa20e1zk \
	I0916 19:06:53.491744  424211 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e \
	I0916 19:06:53.491773  424211 kubeadm.go:310] 	--control-plane 
	I0916 19:06:53.491777  424211 kubeadm.go:310] 
	I0916 19:06:53.491903  424211 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 19:06:53.491912  424211 kubeadm.go:310] 
	I0916 19:06:53.492008  424211 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ci1pqh.qbqeprrvfa20e1zk \
	I0916 19:06:53.492151  424211 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7408e4252bcab2defa43085adb455f3261164f36f62c793c4554dcb65833df0e 
	I0916 19:06:53.492506  424211 kubeadm.go:310] W0916 19:06:42.937954     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 19:06:53.492883  424211 kubeadm.go:310] W0916 19:06:42.939583     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 19:06:53.493027  424211 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 19:06:53.493050  424211 cni.go:84] Creating CNI manager for ""
	I0916 19:06:53.493059  424211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:06:53.495218  424211 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0916 19:06:53.945937  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:53.946358  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find current IP address of domain old-k8s-version-923816 in network mk-old-k8s-version-923816
	I0916 19:06:53.946384  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | I0916 19:06:53.946293  424960 retry.go:31] will retry after 5.569297036s: waiting for machine to come up
	I0916 19:06:53.496988  424211 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0916 19:06:53.510200  424211 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
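The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist. A small sketch, assuming that same path, for inspecting what was written and confirming the runtime considers the network ready:

    # Sketch: show the CNI config minikube copied and check CRI-O's view of it.
    # `crictl info` includes the runtime status conditions (e.g. NetworkReady).
    sudo cat /etc/cni/net.d/1-k8s.conflist
    sudo crictl info | grep -i -A2 networkready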
	I0916 19:06:53.534255  424211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 19:06:53.534354  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:53.534391  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-671192 minikube.k8s.io/updated_at=2024_09_16T19_06_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=91d692c919753635ac118b7ed7ae5503b67c63c8 minikube.k8s.io/name=pause-671192 minikube.k8s.io/primary=true
	I0916 19:06:53.777182  424211 ops.go:34] apiserver oom_adj: -16
	I0916 19:06:53.777386  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:54.278140  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:54.778107  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:55.277470  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:55.777509  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:56.278137  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:56.778157  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:57.277712  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:57.777464  424211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 19:06:57.930077  424211 kubeadm.go:1113] duration metric: took 4.395794012s to wait for elevateKubeSystemPrivileges
	I0916 19:06:57.930107  424211 kubeadm.go:394] duration metric: took 15.297849556s to StartCluster
	I0916 19:06:57.930131  424211 settings.go:142] acquiring lock: {Name:mk9af1b5fb868180f97a2648a387fb06c7d5fde7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:57.930216  424211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 19:06:57.931460  424211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/kubeconfig: {Name:mk8f19e4e61aad6cdecf3a2028815277e5ffb248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:06:57.931721  424211 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.172 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0916 19:06:57.931735  424211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 19:06:57.931968  424211 config.go:182] Loaded profile config "pause-671192": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 19:06:57.933367  424211 out.go:177] * Verifying Kubernetes components...
	I0916 19:07:01.066334  424928 start.go:364] duration metric: took 25.497012263s to acquireMachinesLock for "kubernetes-upgrade-698346"
	I0916 19:07:01.066389  424928 start.go:96] Skipping create...Using existing machine configuration
	I0916 19:07:01.066402  424928 fix.go:54] fixHost starting: 
	I0916 19:07:01.066872  424928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 19:07:01.066928  424928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 19:07:01.084732  424928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0916 19:07:01.085208  424928 main.go:141] libmachine: () Calling .GetVersion
	I0916 19:07:01.085868  424928 main.go:141] libmachine: Using API Version  1
	I0916 19:07:01.085894  424928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 19:07:01.086308  424928 main.go:141] libmachine: () Calling .GetMachineName
	I0916 19:07:01.086522  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:01.086674  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetState
	I0916 19:07:01.088534  424928 fix.go:112] recreateIfNeeded on kubernetes-upgrade-698346: state=Running err=<nil>
	W0916 19:07:01.088553  424928 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 19:07:01.090613  424928 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-698346" VM ...
	I0916 19:06:59.518059  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.518618  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has current primary IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.518645  424613 main.go:141] libmachine: (old-k8s-version-923816) Found IP for machine: 192.168.39.46
	I0916 19:06:59.518657  424613 main.go:141] libmachine: (old-k8s-version-923816) Reserving static IP address...
	I0916 19:06:59.519018  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-923816", mac: "52:54:00:55:8d:ae", ip: "192.168.39.46"} in network mk-old-k8s-version-923816
	I0916 19:06:59.597845  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Getting to WaitForSSH function...
	I0916 19:06:59.597880  424613 main.go:141] libmachine: (old-k8s-version-923816) Reserved static IP address: 192.168.39.46
	I0916 19:06:59.597894  424613 main.go:141] libmachine: (old-k8s-version-923816) Waiting for SSH to be available...
	I0916 19:06:59.600991  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.601417  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:minikube Clientid:01:52:54:00:55:8d:ae}
	I0916 19:06:59.601438  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.601624  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Using SSH client type: external
	I0916 19:06:59.601646  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Using SSH private key: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa (-rw-------)
	I0916 19:06:59.601673  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.46 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0916 19:06:59.601686  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | About to run SSH command:
	I0916 19:06:59.601698  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | exit 0
	I0916 19:06:59.733246  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | SSH cmd err, output: <nil>: 
	I0916 19:06:59.733500  424613 main.go:141] libmachine: (old-k8s-version-923816) KVM machine creation complete!
	I0916 19:06:59.733917  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetConfigRaw
	I0916 19:06:59.734538  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:06:59.734734  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:06:59.734879  424613 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0916 19:06:59.734894  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetState
	I0916 19:06:59.736306  424613 main.go:141] libmachine: Detecting operating system of created instance...
	I0916 19:06:59.736325  424613 main.go:141] libmachine: Waiting for SSH to be available...
	I0916 19:06:59.736332  424613 main.go:141] libmachine: Getting to WaitForSSH function...
	I0916 19:06:59.736347  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:06:59.738922  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.739310  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:06:59.739344  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.739541  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:06:59.739748  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:06:59.739903  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:06:59.740054  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:06:59.740218  424613 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:59.740543  424613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0916 19:06:59.740569  424613 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0916 19:06:59.856715  424613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:06:59.856740  424613 main.go:141] libmachine: Detecting the provisioner...
	I0916 19:06:59.856749  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:06:59.859570  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.859995  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:06:59.860021  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.860288  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:06:59.860536  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:06:59.860721  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:06:59.860869  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:06:59.861117  424613 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:59.861345  424613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0916 19:06:59.861384  424613 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0916 19:06:59.973911  424613 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0916 19:06:59.973989  424613 main.go:141] libmachine: found compatible host: buildroot
	I0916 19:06:59.974000  424613 main.go:141] libmachine: Provisioning with buildroot...
	I0916 19:06:59.974011  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetMachineName
	I0916 19:06:59.974257  424613 buildroot.go:166] provisioning hostname "old-k8s-version-923816"
	I0916 19:06:59.974288  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetMachineName
	I0916 19:06:59.974508  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:06:59.977356  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.977746  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:06:59.977779  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:06:59.977856  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:06:59.978056  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:06:59.978236  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:06:59.978387  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:06:59.978533  424613 main.go:141] libmachine: Using SSH client type: native
	I0916 19:06:59.978722  424613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0916 19:06:59.978737  424613 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-923816 && echo "old-k8s-version-923816" | sudo tee /etc/hostname
	I0916 19:07:00.110261  424613 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-923816
	
	I0916 19:07:00.110294  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:00.113792  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.114082  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.114128  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.114296  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:00.114517  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.114724  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.114946  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:00.115130  424613 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:00.115329  424613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0916 19:07:00.115345  424613 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-923816' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-923816/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-923816' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 19:07:00.239376  424613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:07:00.239424  424613 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 19:07:00.239450  424613 buildroot.go:174] setting up certificates
	I0916 19:07:00.239460  424613 provision.go:84] configureAuth start
	I0916 19:07:00.239468  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetMachineName
	I0916 19:07:00.239813  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetIP
	I0916 19:07:00.242886  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.243411  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.243448  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.243534  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:00.246237  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.246588  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.246614  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.246753  424613 provision.go:143] copyHostCerts
	I0916 19:07:00.246818  424613 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 19:07:00.246831  424613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 19:07:00.246905  424613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 19:07:00.247030  424613 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 19:07:00.247042  424613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 19:07:00.247071  424613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 19:07:00.247177  424613 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 19:07:00.247189  424613 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 19:07:00.247217  424613 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 19:07:00.247316  424613 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-923816 san=[127.0.0.1 192.168.39.46 localhost minikube old-k8s-version-923816]
	I0916 19:07:00.375311  424613 provision.go:177] copyRemoteCerts
	I0916 19:07:00.375386  424613 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 19:07:00.375418  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:00.378633  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.379102  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.379185  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.379474  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:00.379797  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.380028  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:00.380197  424613 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa Username:docker}
	I0916 19:07:00.469841  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 19:07:00.496874  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 19:07:00.524902  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 19:07:00.552139  424613 provision.go:87] duration metric: took 312.665132ms to configureAuth
	I0916 19:07:00.552170  424613 buildroot.go:189] setting minikube options for container-runtime
	I0916 19:07:00.552371  424613 config.go:182] Loaded profile config "old-k8s-version-923816": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0916 19:07:00.552456  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:00.555505  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.555927  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.555966  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.556347  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:00.556592  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.556800  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.557001  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:00.557164  424613 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:00.557356  424613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0916 19:07:00.557374  424613 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 19:07:00.802459  424613 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 19:07:00.802489  424613 main.go:141] libmachine: Checking connection to Docker...
	I0916 19:07:00.802498  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetURL
	I0916 19:07:00.803858  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | Using libvirt version 6000000
	I0916 19:07:00.806688  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.807071  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.807098  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.807294  424613 main.go:141] libmachine: Docker is up and running!
	I0916 19:07:00.807308  424613 main.go:141] libmachine: Reticulating splines...
	I0916 19:07:00.807317  424613 client.go:171] duration metric: took 24.537838787s to LocalClient.Create
	I0916 19:07:00.807352  424613 start.go:167] duration metric: took 24.53792152s to libmachine.API.Create "old-k8s-version-923816"
	I0916 19:07:00.807366  424613 start.go:293] postStartSetup for "old-k8s-version-923816" (driver="kvm2")
	I0916 19:07:00.807380  424613 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 19:07:00.807401  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:07:00.807671  424613 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 19:07:00.807709  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:00.810419  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.810873  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.810895  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.811167  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:00.811344  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.811540  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:00.811678  424613 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa Username:docker}
	I0916 19:07:00.900118  424613 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 19:07:00.905198  424613 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 19:07:00.905241  424613 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 19:07:00.905321  424613 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 19:07:00.905449  424613 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 19:07:00.905591  424613 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 19:07:00.916418  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:07:00.942717  424613 start.go:296] duration metric: took 135.331561ms for postStartSetup
	I0916 19:07:00.942782  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetConfigRaw
	I0916 19:07:00.943432  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetIP
	I0916 19:07:00.946015  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.946395  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.946418  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.946697  424613 profile.go:143] Saving config to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/config.json ...
	I0916 19:07:00.946895  424613 start.go:128] duration metric: took 24.700317467s to createHost
	I0916 19:07:00.946921  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:00.949202  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.949536  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:00.949563  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:00.949723  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:00.949927  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.950095  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:00.950257  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:00.950423  424613 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:00.950641  424613 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I0916 19:07:00.950664  424613 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 19:07:01.066177  424613 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726513621.047807379
	
	I0916 19:07:01.066208  424613 fix.go:216] guest clock: 1726513621.047807379
	I0916 19:07:01.066219  424613 fix.go:229] Guest: 2024-09-16 19:07:01.047807379 +0000 UTC Remote: 2024-09-16 19:07:00.946908018 +0000 UTC m=+49.880257549 (delta=100.899361ms)
	I0916 19:07:01.066246  424613 fix.go:200] guest clock delta is within tolerance: 100.899361ms
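	The delta logged here is simply the guest clock minus the host-side timestamp captured just before the SSH call: 1726513621.047807379 − 1726513620.946908018 ≈ 0.100899 s, i.e. the 100.899361ms shown, which is why fix.go reports it as within tolerance.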
	I0916 19:07:01.066256  424613 start.go:83] releasing machines lock for "old-k8s-version-923816", held for 24.81990067s
	I0916 19:07:01.066301  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:07:01.066611  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetIP
	I0916 19:07:01.069672  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:01.070122  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:01.070163  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:01.070292  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:07:01.070804  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:07:01.071013  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .DriverName
	I0916 19:07:01.071131  424613 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 19:07:01.071179  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:01.071295  424613 ssh_runner.go:195] Run: cat /version.json
	I0916 19:07:01.071321  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHHostname
	I0916 19:07:01.074188  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:01.074418  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:01.074723  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:01.074751  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:01.074782  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:01.074797  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:01.074944  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:01.075055  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHPort
	I0916 19:07:01.075165  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:01.075183  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHKeyPath
	I0916 19:07:01.075318  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:01.075391  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetSSHUsername
	I0916 19:07:01.075527  424613 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa Username:docker}
	I0916 19:07:01.075538  424613 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/old-k8s-version-923816/id_rsa Username:docker}
	I0916 19:07:01.182058  424613 ssh_runner.go:195] Run: systemctl --version
	I0916 19:07:01.191896  424613 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 19:07:01.366178  424613 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 19:07:01.373433  424613 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 19:07:01.373523  424613 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 19:07:01.393941  424613 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0916 19:07:01.393966  424613 start.go:495] detecting cgroup driver to use...
	I0916 19:07:01.394041  424613 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 19:07:01.411789  424613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 19:07:01.429271  424613 docker.go:217] disabling cri-docker service (if available) ...
	I0916 19:07:01.429361  424613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 19:07:01.445780  424613 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 19:07:01.461286  424613 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 19:07:01.587007  424613 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 19:07:01.764515  424613 docker.go:233] disabling docker service ...
	I0916 19:07:01.764590  424613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 19:07:01.784750  424613 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 19:07:01.800005  424613 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 19:07:01.926401  424613 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 19:07:02.048346  424613 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 19:07:02.063962  424613 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:07:02.085424  424613 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0916 19:07:02.085491  424613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:02.096321  424613 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 19:07:02.096408  424613 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:02.107015  424613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:02.117393  424613 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
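	Taken together, the three sed edits above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf; a quick check of the result on the guest (the grep itself is only an illustration, values as set by the commands above):
	
	    # Show the keys the sed commands just rewrote.
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # Expected per the edits above:
	    #   pause_image = "registry.k8s.io/pause:3.2"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"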
	I0916 19:07:02.127984  424613 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 19:07:02.139737  424613 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 19:07:02.149803  424613 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0916 19:07:02.149864  424613 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0916 19:07:02.163853  424613 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
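	As the message notes, the failed sysctl probe is expected when the br_netfilter module is not loaded yet; the two commands that follow recover from it. A minimal equivalent on the guest (the re-check on the second line is an illustration, not in the log):
	
	    sudo modprobe br_netfilter                           # make the bridge netfilter sysctls appear
	    sudo sysctl net.bridge.bridge-nf-call-iptables       # should now print a value instead of erroring
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # enable IPv4 forwarding, as in the log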
	I0916 19:07:02.178166  424613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:07:02.310021  424613 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 19:07:02.431815  424613 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 19:07:02.431904  424613 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 19:07:02.437138  424613 start.go:563] Will wait 60s for crictl version
	I0916 19:07:02.437199  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:02.441641  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 19:07:02.484697  424613 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 19:07:02.484807  424613 ssh_runner.go:195] Run: crio --version
	I0916 19:07:02.515366  424613 ssh_runner.go:195] Run: crio --version
	I0916 19:07:02.548592  424613 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0916 19:06:57.934744  424211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:06:58.136053  424211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 19:06:58.164056  424211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:06:58.652753  424211 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
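	A quick way to confirm the injected record, assuming kubectl access to the same cluster (the jsonpath/grep below is illustrative, not taken from the report):
	
	    # Print the Corefile and show the injected hosts block.
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	    # Expected to include, per the sed insertion above:
	    #   hosts {
	    #      192.168.72.1 host.minikube.internal
	    #      fallthrough
	    #   }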
	I0916 19:06:58.654802  424211 node_ready.go:35] waiting up to 6m0s for node "pause-671192" to be "Ready" ...
	I0916 19:06:58.672945  424211 node_ready.go:49] node "pause-671192" has status "Ready":"True"
	I0916 19:06:58.672960  424211 node_ready.go:38] duration metric: took 18.134765ms for node "pause-671192" to be "Ready" ...
	I0916 19:06:58.672970  424211 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 19:06:58.683748  424211 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace to be "Ready" ...
	I0916 19:06:59.160894  424211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "pause-671192" context rescaled to 1 replicas
	I0916 19:07:00.693297  424211 pod_ready.go:103] pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace has status "Ready":"False"
	I0916 19:07:02.693552  424211 pod_ready.go:103] pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace has status "Ready":"False"
	I0916 19:07:01.092122  424928 machine.go:93] provisionDockerMachine start ...
	I0916 19:07:01.092151  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:01.092380  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:01.095099  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.095636  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.095674  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.095832  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:01.096031  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.096192  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.096354  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:01.096540  424928 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:01.096811  424928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:07:01.096833  424928 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 19:07:01.218118  424928 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-698346
	
	I0916 19:07:01.218168  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:07:01.218418  424928 buildroot.go:166] provisioning hostname "kubernetes-upgrade-698346"
	I0916 19:07:01.218453  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:07:01.218613  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:01.221749  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.222240  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.222283  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.222417  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:01.222620  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.222793  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.223004  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:01.223231  424928 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:01.223469  424928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:07:01.223485  424928 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-698346 && echo "kubernetes-upgrade-698346" | sudo tee /etc/hostname
	I0916 19:07:01.361021  424928 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-698346
	
	I0916 19:07:01.361058  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:01.364181  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.364639  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.364676  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.364954  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:01.365173  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.365317  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.365509  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:01.365774  424928 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:01.365984  424928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:07:01.366009  424928 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-698346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-698346/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-698346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 19:07:01.483017  424928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 19:07:01.483075  424928 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19649-371203/.minikube CaCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19649-371203/.minikube}
	I0916 19:07:01.483105  424928 buildroot.go:174] setting up certificates
	I0916 19:07:01.483116  424928 provision.go:84] configureAuth start
	I0916 19:07:01.483133  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetMachineName
	I0916 19:07:01.483482  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:07:01.487279  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.487748  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.487788  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.488087  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:01.490718  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.491116  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.491161  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.491289  424928 provision.go:143] copyHostCerts
	I0916 19:07:01.491367  424928 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem, removing ...
	I0916 19:07:01.491382  424928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem
	I0916 19:07:01.491454  424928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/ca.pem (1078 bytes)
	I0916 19:07:01.491586  424928 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem, removing ...
	I0916 19:07:01.491596  424928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem
	I0916 19:07:01.491619  424928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/cert.pem (1123 bytes)
	I0916 19:07:01.491698  424928 exec_runner.go:144] found /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem, removing ...
	I0916 19:07:01.491705  424928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem
	I0916 19:07:01.491723  424928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19649-371203/.minikube/key.pem (1679 bytes)
	I0916 19:07:01.491796  424928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-698346 san=[127.0.0.1 192.168.50.23 kubernetes-upgrade-698346 localhost minikube]
	I0916 19:07:01.609961  424928 provision.go:177] copyRemoteCerts
	I0916 19:07:01.610027  424928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 19:07:01.610058  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:01.613003  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.613411  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.613451  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.613613  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:01.613835  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.614013  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:01.614164  424928 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:07:01.706985  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 19:07:01.736740  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0916 19:07:01.764055  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 19:07:01.790455  424928 provision.go:87] duration metric: took 307.319507ms to configureAuth
	I0916 19:07:01.790494  424928 buildroot.go:189] setting minikube options for container-runtime
	I0916 19:07:01.790756  424928 config.go:182] Loaded profile config "kubernetes-upgrade-698346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 19:07:01.790904  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:01.793935  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.794360  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:01.794392  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:01.794604  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:01.794804  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.794984  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:01.795148  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:01.795372  424928 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:01.795629  424928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:07:01.795651  424928 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0916 19:07:02.549978  424613 main.go:141] libmachine: (old-k8s-version-923816) Calling .GetIP
	I0916 19:07:02.552743  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:02.553127  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:8d:ae", ip: ""} in network mk-old-k8s-version-923816: {Iface:virbr4 ExpiryTime:2024-09-16 20:06:52 +0000 UTC Type:0 Mac:52:54:00:55:8d:ae Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:old-k8s-version-923816 Clientid:01:52:54:00:55:8d:ae}
	I0916 19:07:02.553154  424613 main.go:141] libmachine: (old-k8s-version-923816) DBG | domain old-k8s-version-923816 has defined IP address 192.168.39.46 and MAC address 52:54:00:55:8d:ae in network mk-old-k8s-version-923816
	I0916 19:07:02.553471  424613 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0916 19:07:02.558082  424613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:07:02.572061  424613 kubeadm.go:883] updating cluster {Name:old-k8s-version-923816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-923816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 19:07:02.572171  424613 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 19:07:02.572212  424613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:07:02.606349  424613 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 19:07:02.606462  424613 ssh_runner.go:195] Run: which lz4
	I0916 19:07:02.611156  424613 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 19:07:02.615495  424613 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 19:07:02.615526  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0916 19:07:04.367946  424613 crio.go:462] duration metric: took 1.756817801s to copy over tarball
	I0916 19:07:04.368033  424613 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 19:07:05.190989  424211 pod_ready.go:103] pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace has status "Ready":"False"
	I0916 19:07:07.191186  424211 pod_ready.go:103] pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace has status "Ready":"False"
	I0916 19:07:07.871958  424928 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0916 19:07:07.871989  424928 machine.go:96] duration metric: took 6.77984946s to provisionDockerMachine
	I0916 19:07:07.872005  424928 start.go:293] postStartSetup for "kubernetes-upgrade-698346" (driver="kvm2")
	I0916 19:07:07.872019  424928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 19:07:07.872044  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:07.872428  424928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 19:07:07.872463  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:07.875756  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:07.876219  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:07.876252  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:07.876460  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:07.876675  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:07.876850  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:07.877039  424928 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:07:07.968046  424928 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 19:07:07.973675  424928 info.go:137] Remote host: Buildroot 2023.02.9
	I0916 19:07:07.973707  424928 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/addons for local assets ...
	I0916 19:07:07.973794  424928 filesync.go:126] Scanning /home/jenkins/minikube-integration/19649-371203/.minikube/files for local assets ...
	I0916 19:07:07.973889  424928 filesync.go:149] local asset: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem -> 3784632.pem in /etc/ssl/certs
	I0916 19:07:07.974004  424928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 19:07:07.984337  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:07:08.019875  424928 start.go:296] duration metric: took 147.854272ms for postStartSetup
	I0916 19:07:08.019923  424928 fix.go:56] duration metric: took 6.953522315s for fixHost
	I0916 19:07:08.019951  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:08.023078  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.023458  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:08.023495  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.023684  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:08.023927  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:08.024145  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:08.024277  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:08.024502  424928 main.go:141] libmachine: Using SSH client type: native
	I0916 19:07:08.024739  424928 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 192.168.50.23 22 <nil> <nil>}
	I0916 19:07:08.024753  424928 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0916 19:07:08.141775  424928 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726513628.134936540
	
	I0916 19:07:08.141802  424928 fix.go:216] guest clock: 1726513628.134936540
	I0916 19:07:08.141812  424928 fix.go:229] Guest: 2024-09-16 19:07:08.13493654 +0000 UTC Remote: 2024-09-16 19:07:08.019928695 +0000 UTC m=+32.631471678 (delta=115.007845ms)
	I0916 19:07:08.141840  424928 fix.go:200] guest clock delta is within tolerance: 115.007845ms
	I0916 19:07:08.141847  424928 start.go:83] releasing machines lock for "kubernetes-upgrade-698346", held for 7.0754858s
	I0916 19:07:08.141870  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:08.142118  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:07:08.144793  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.145170  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:08.145199  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.145326  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:08.145846  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:08.146049  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .DriverName
	I0916 19:07:08.146200  424928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 19:07:08.146260  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:08.146279  424928 ssh_runner.go:195] Run: cat /version.json
	I0916 19:07:08.146304  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHHostname
	I0916 19:07:08.148687  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.148975  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.149117  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:08.149147  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.149245  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:08.149265  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:08.149283  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:08.149467  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:08.149532  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHPort
	I0916 19:07:08.149703  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:08.149734  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHKeyPath
	I0916 19:07:08.149798  424928 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:07:08.149870  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetSSHUsername
	I0916 19:07:08.149996  424928 sshutil.go:53] new ssh client: &{IP:192.168.50.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/kubernetes-upgrade-698346/id_rsa Username:docker}
	I0916 19:07:08.251060  424928 ssh_runner.go:195] Run: systemctl --version
	I0916 19:07:08.258586  424928 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0916 19:07:08.420442  424928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0916 19:07:08.426664  424928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0916 19:07:08.426727  424928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 19:07:08.437092  424928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
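Note: ssh_runner logs the find invocation above with its shell quoting stripped. A quoted form that matches the logged arguments is sketched below; the exact quoting is an assumption, only the arguments themselves come from the log.

	# sketch: quoting assumed, arguments taken from the logged command
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;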
	I0916 19:07:08.437138  424928 start.go:495] detecting cgroup driver to use...
	I0916 19:07:08.437222  424928 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0916 19:07:08.458404  424928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0916 19:07:08.478143  424928 docker.go:217] disabling cri-docker service (if available) ...
	I0916 19:07:08.478239  424928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 19:07:08.497741  424928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 19:07:08.514341  424928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 19:07:08.694521  424928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 19:07:08.884532  424928 docker.go:233] disabling docker service ...
	I0916 19:07:08.884609  424928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 19:07:08.907602  424928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 19:07:08.923846  424928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 19:07:09.078338  424928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 19:07:09.237448  424928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 19:07:09.257118  424928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 19:07:09.279944  424928 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0916 19:07:09.280017  424928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.292270  424928 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0916 19:07:09.292350  424928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.303771  424928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.316790  424928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.328252  424928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 19:07:09.339840  424928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.351426  424928 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.364480  424928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0916 19:07:09.379529  424928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 19:07:09.391887  424928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 19:07:09.402715  424928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:07:09.600833  424928 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0916 19:07:10.834446  424928 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.233567832s)
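Note: taken together, the tee and sed steps above leave the runtime configured roughly as sketched below; this is reconstructed from the logged commands, not a capture of the files on the VM.

	# /etc/crictl.yaml (written by the tee above)
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf, keys touched by the sed edits (sketch)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]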
	I0916 19:07:10.834484  424928 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0916 19:07:10.834530  424928 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0916 19:07:10.841151  424928 start.go:563] Will wait 60s for crictl version
	I0916 19:07:10.841216  424928 ssh_runner.go:195] Run: which crictl
	I0916 19:07:10.845532  424928 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 19:07:10.891666  424928 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0916 19:07:10.891779  424928 ssh_runner.go:195] Run: crio --version
	I0916 19:07:10.928241  424928 ssh_runner.go:195] Run: crio --version
	I0916 19:07:10.961278  424928 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0916 19:07:06.971015  424613 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.602943807s)
	I0916 19:07:06.971055  424613 crio.go:469] duration metric: took 2.603074258s to extract the tarball
	I0916 19:07:06.971066  424613 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 19:07:07.015891  424613 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:07:07.069570  424613 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 19:07:07.069603  424613 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 19:07:07.069709  424613 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:07:07.069800  424613 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:07.069827  424613 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.069737  424613 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.069712  424613 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:07.069757  424613 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.069787  424613 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 19:07:07.069840  424613 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 19:07:07.071299  424613 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:07.071509  424613 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.071520  424613 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:07:07.071556  424613 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 19:07:07.071602  424613 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:07.071671  424613 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.071750  424613 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.071852  424613 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 19:07:07.283794  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0916 19:07:07.336830  424613 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 19:07:07.336883  424613 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 19:07:07.336954  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.342373  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 19:07:07.344147  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.344977  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.363862  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.366678  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:07.390944  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0916 19:07:07.441594  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 19:07:07.492735  424613 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 19:07:07.492888  424613 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.492985  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.492823  424613 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 19:07:07.493064  424613 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.493137  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.521202  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:07.564592  424613 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 19:07:07.564651  424613 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.564712  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.578679  424613 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 19:07:07.578731  424613 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:07.578738  424613 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 19:07:07.578771  424613 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 19:07:07.578785  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.578816  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.578874  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 19:07:07.578936  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.578958  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.655712  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.655723  424613 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 19:07:07.655827  424613 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:07.655858  424613 ssh_runner.go:195] Run: which crictl
	I0916 19:07:07.685275  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:07.715157  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.715207  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 19:07:07.715157  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 19:07:07.715156  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.738818  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:07.738909  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.790823  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:07.876643  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 19:07:07.876718  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 19:07:07.876781  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 19:07:07.876873  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:07.884546  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 19:07:07.971143  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 19:07:08.017282  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 19:07:08.017344  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 19:07:08.017464  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 19:07:08.083170  424613 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 19:07:08.083174  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 19:07:08.083178  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 19:07:08.083271  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 19:07:08.128995  424613 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 19:07:08.204035  424613 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 19:07:08.348417  424613 cache_images.go:92] duration metric: took 1.278775166s to LoadCachedImages
	W0916 19:07:08.348544  424613 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19649-371203/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
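Note: with no preload and no usable cache, kubeadm pulls the v1.20.0 control-plane images itself during preflight (see the [preflight] lines further down). The same crictl call used above can be run by hand to see which images CRI-O already holds; the grep pattern here is only illustrative.

	sudo crictl images --output json
	# or, narrowed to the control-plane images this cluster needs:
	sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause'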
	I0916 19:07:08.348564  424613 kubeadm.go:934] updating node { 192.168.39.46 8443 v1.20.0 crio true true} ...
	I0916 19:07:08.348708  424613 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-923816 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-923816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 19:07:08.348802  424613 ssh_runner.go:195] Run: crio config
	I0916 19:07:08.406460  424613 cni.go:84] Creating CNI manager for ""
	I0916 19:07:08.406487  424613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:07:08.406501  424613 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 19:07:08.406525  424613 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.46 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-923816 NodeName:old-k8s-version-923816 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 19:07:08.406710  424613 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-923816"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.46"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 19:07:08.406790  424613 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 19:07:08.419091  424613 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 19:07:08.419176  424613 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 19:07:08.430797  424613 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0916 19:07:08.451085  424613 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 19:07:08.473693  424613 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
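Note: the kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new and later promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. The same file can be exercised without touching the node via kubeadm's dry-run mode; this is not something the test does, just a way to sanity-check such a config.

	# sketch: dry run only, not part of the test flow
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run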
	I0916 19:07:08.496630  424613 ssh_runner.go:195] Run: grep 192.168.39.46	control-plane.minikube.internal$ /etc/hosts
	I0916 19:07:08.501214  424613 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.46	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 19:07:08.516683  424613 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:07:08.663747  424613 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:07:08.687061  424613 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816 for IP: 192.168.39.46
	I0916 19:07:08.687090  424613 certs.go:194] generating shared ca certs ...
	I0916 19:07:08.687112  424613 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:08.687343  424613 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 19:07:08.687418  424613 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 19:07:08.687431  424613 certs.go:256] generating profile certs ...
	I0916 19:07:08.687500  424613 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/client.key
	I0916 19:07:08.687519  424613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/client.crt with IP's: []
	I0916 19:07:09.358614  424613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/client.crt ...
	I0916 19:07:09.358651  424613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/client.crt: {Name:mkedbfc184e91209950d437550c4001f40dfe942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:09.358870  424613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/client.key ...
	I0916 19:07:09.358891  424613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/client.key: {Name:mk14996262366f7e680013cc28b16a40c7cea0be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:09.359002  424613 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.key.e7a805de
	I0916 19:07:09.359022  424613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.crt.e7a805de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.46]
	I0916 19:07:09.555628  424613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.crt.e7a805de ...
	I0916 19:07:09.555659  424613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.crt.e7a805de: {Name:mk2592eaa2568c3dad9148729baf0bd289ef5dc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:09.555825  424613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.key.e7a805de ...
	I0916 19:07:09.555838  424613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.key.e7a805de: {Name:mka6dbd2fb82f8ab2cb8924e7279ea7e14829fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:09.555907  424613 certs.go:381] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.crt.e7a805de -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.crt
	I0916 19:07:09.555980  424613 certs.go:385] copying /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.key.e7a805de -> /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.key
	I0916 19:07:09.556062  424613 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.key
	I0916 19:07:09.556080  424613 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.crt with IP's: []
	I0916 19:07:09.806139  424613 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.crt ...
	I0916 19:07:09.806172  424613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.crt: {Name:mk38a9aac21286a5ef6e4c9d1201b029aad548a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:09.806375  424613 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.key ...
	I0916 19:07:09.806390  424613 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.key: {Name:mk6f70327e1ac1ed92377f3a4e2e6a80bf1f8624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:09.806607  424613 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 19:07:09.806658  424613 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 19:07:09.806673  424613 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 19:07:09.806703  424613 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 19:07:09.806734  424613 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 19:07:09.806767  424613 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 19:07:09.806819  424613 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:07:09.807607  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 19:07:09.845895  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 19:07:09.870996  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 19:07:09.895623  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 19:07:09.920808  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 19:07:09.957547  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 19:07:09.984338  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 19:07:10.017239  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/old-k8s-version-923816/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 19:07:10.090585  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 19:07:10.119892  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 19:07:10.147631  424613 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 19:07:10.173882  424613 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 19:07:10.191821  424613 ssh_runner.go:195] Run: openssl version
	I0916 19:07:10.198164  424613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 19:07:10.209693  424613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:07:10.214657  424613 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:07:10.214724  424613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:07:10.220946  424613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 19:07:10.231996  424613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 19:07:10.244440  424613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 19:07:10.249665  424613 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 19:07:10.249730  424613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 19:07:10.255990  424613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 19:07:10.267791  424613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 19:07:10.279712  424613 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 19:07:10.284706  424613 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 19:07:10.284772  424613 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 19:07:10.291131  424613 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
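Note: the b5213941.0, 51391683.0 and 3ec20f2e.0 link names follow OpenSSL's subject-hash convention: openssl x509 -hash prints the hash for each certificate, and the symlink in /etc/ssl/certs is named <hash>.0. For the minikube CA above that works out to:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0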
	I0916 19:07:10.303549  424613 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 19:07:10.308587  424613 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 19:07:10.308690  424613 kubeadm.go:392] StartCluster: {Name:old-k8s-version-923816 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-923816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:07:10.308790  424613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 19:07:10.308841  424613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 19:07:10.352348  424613 cri.go:89] found id: ""
	I0916 19:07:10.352416  424613 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 19:07:10.362788  424613 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 19:07:10.373910  424613 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 19:07:10.388995  424613 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 19:07:10.389025  424613 kubeadm.go:157] found existing configuration files:
	
	I0916 19:07:10.389084  424613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 19:07:10.399272  424613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 19:07:10.399359  424613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 19:07:10.413577  424613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 19:07:10.424280  424613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 19:07:10.424348  424613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 19:07:10.435260  424613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 19:07:10.449266  424613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 19:07:10.449338  424613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 19:07:10.460340  424613 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 19:07:10.475254  424613 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 19:07:10.475341  424613 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 19:07:10.488859  424613 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0916 19:07:10.640892  424613 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 19:07:10.641033  424613 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 19:07:10.834754  424613 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 19:07:10.834899  424613 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 19:07:10.835062  424613 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 19:07:11.046104  424613 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 19:07:11.049168  424613 out.go:235]   - Generating certificates and keys ...
	I0916 19:07:11.049280  424613 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 19:07:11.049375  424613 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 19:07:09.191883  424211 pod_ready.go:103] pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace has status "Ready":"False"
	I0916 19:07:11.192137  424211 pod_ready.go:103] pod "coredns-7c65d6cfc9-rkff7" in "kube-system" namespace has status "Ready":"False"
	I0916 19:07:11.287468  424613 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 19:07:11.372424  424613 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 19:07:11.626893  424613 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 19:07:11.897944  424613 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 19:07:12.092036  424613 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 19:07:12.092366  424613 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-923816] and IPs [192.168.39.46 127.0.0.1 ::1]
	I0916 19:07:12.224530  424613 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 19:07:12.224889  424613 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-923816] and IPs [192.168.39.46 127.0.0.1 ::1]
	I0916 19:07:12.369252  424613 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 19:07:12.589881  424613 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 19:07:12.690567  424613 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 19:07:12.690954  424613 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 19:07:12.750781  424613 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 19:07:13.193086  424613 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 19:07:13.264627  424613 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 19:07:13.564648  424613 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 19:07:13.583702  424613 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 19:07:13.585131  424613 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 19:07:13.585205  424613 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 19:07:13.722720  424613 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 19:07:10.962756  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) Calling .GetIP
	I0916 19:07:10.965983  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:10.966334  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:2a:df", ip: ""} in network mk-kubernetes-upgrade-698346: {Iface:virbr2 ExpiryTime:2024-09-16 20:06:02 +0000 UTC Type:0 Mac:52:54:00:fe:2a:df Iaid: IPaddr:192.168.50.23 Prefix:24 Hostname:kubernetes-upgrade-698346 Clientid:01:52:54:00:fe:2a:df}
	I0916 19:07:10.966382  424928 main.go:141] libmachine: (kubernetes-upgrade-698346) DBG | domain kubernetes-upgrade-698346 has defined IP address 192.168.50.23 and MAC address 52:54:00:fe:2a:df in network mk-kubernetes-upgrade-698346
	I0916 19:07:10.966583  424928 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0916 19:07:10.971425  424928 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 19:07:10.971559  424928 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 19:07:10.971624  424928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:07:11.021028  424928 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 19:07:11.021057  424928 crio.go:433] Images already preloaded, skipping extraction
	I0916 19:07:11.021110  424928 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 19:07:11.064241  424928 crio.go:514] all images are preloaded for cri-o runtime.
	I0916 19:07:11.064276  424928 cache_images.go:84] Images are preloaded, skipping loading
	I0916 19:07:11.064286  424928 kubeadm.go:934] updating node { 192.168.50.23 8443 v1.31.1 crio true true} ...
	I0916 19:07:11.064406  424928 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-698346 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 19:07:11.064493  424928 ssh_runner.go:195] Run: crio config
	I0916 19:07:11.132630  424928 cni.go:84] Creating CNI manager for ""
	I0916 19:07:11.132660  424928 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 19:07:11.132674  424928 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 19:07:11.132712  424928 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.23 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-698346 NodeName:kubernetes-upgrade-698346 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 19:07:11.132913  424928 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-698346"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 19:07:11.133023  424928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 19:07:11.143898  424928 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 19:07:11.143993  424928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 19:07:11.156366  424928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0916 19:07:11.175530  424928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 19:07:11.196863  424928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
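Note: compared with the v1.20.0 unit earlier in this log, the v1.31.1 kubelet no longer takes --container-runtime-endpoint on its ExecStart line; the CRI socket comes from containerRuntimeEndpoint in the KubeletConfiguration above. The socket can be probed directly with crictl if needed (an illustrative check, not part of the test):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version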
	I0916 19:07:11.216747  424928 ssh_runner.go:195] Run: grep 192.168.50.23	control-plane.minikube.internal$ /etc/hosts
	I0916 19:07:11.221114  424928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 19:07:11.366045  424928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 19:07:11.383128  424928 certs.go:68] Setting up /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346 for IP: 192.168.50.23
	I0916 19:07:11.383156  424928 certs.go:194] generating shared ca certs ...
	I0916 19:07:11.383179  424928 certs.go:226] acquiring lock for ca certs: {Name:mk40ba93daf4f43d05e0fbcdf660ca7c734d5c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 19:07:11.383360  424928 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key
	I0916 19:07:11.383418  424928 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key
	I0916 19:07:11.383431  424928 certs.go:256] generating profile certs ...
	I0916 19:07:11.383550  424928 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/client.key
	I0916 19:07:11.383620  424928 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key.edafc3c5
	I0916 19:07:11.383674  424928 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.key
	I0916 19:07:11.383831  424928 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem (1338 bytes)
	W0916 19:07:11.383876  424928 certs.go:480] ignoring /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463_empty.pem, impossibly tiny 0 bytes
	I0916 19:07:11.383888  424928 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 19:07:11.383924  424928 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/ca.pem (1078 bytes)
	I0916 19:07:11.383959  424928 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/cert.pem (1123 bytes)
	I0916 19:07:11.383995  424928 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/certs/key.pem (1679 bytes)
	I0916 19:07:11.384047  424928 certs.go:484] found cert: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem (1708 bytes)
	I0916 19:07:11.384973  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 19:07:11.413594  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0916 19:07:11.446238  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 19:07:11.473855  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0916 19:07:11.501480  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 19:07:11.529228  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 19:07:11.563924  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 19:07:11.639145  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/kubernetes-upgrade-698346/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 19:07:11.667628  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 19:07:11.702046  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/certs/378463.pem --> /usr/share/ca-certificates/378463.pem (1338 bytes)
	I0916 19:07:11.731297  424928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/ssl/certs/3784632.pem --> /usr/share/ca-certificates/3784632.pem (1708 bytes)
	I0916 19:07:11.777906  424928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 19:07:11.805234  424928 ssh_runner.go:195] Run: openssl version
	I0916 19:07:11.812725  424928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 19:07:11.826657  424928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:07:11.835434  424928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 17:25 /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:07:11.835517  424928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 19:07:11.843493  424928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 19:07:11.857117  424928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/378463.pem && ln -fs /usr/share/ca-certificates/378463.pem /etc/ssl/certs/378463.pem"
	I0916 19:07:11.870499  424928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/378463.pem
	I0916 19:07:11.875505  424928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 18:05 /usr/share/ca-certificates/378463.pem
	I0916 19:07:11.875565  424928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/378463.pem
	I0916 19:07:11.884106  424928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/378463.pem /etc/ssl/certs/51391683.0"
	I0916 19:07:11.900623  424928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3784632.pem && ln -fs /usr/share/ca-certificates/3784632.pem /etc/ssl/certs/3784632.pem"
	I0916 19:07:11.917690  424928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3784632.pem
	I0916 19:07:11.926676  424928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 18:05 /usr/share/ca-certificates/3784632.pem
	I0916 19:07:11.926767  424928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3784632.pem
	I0916 19:07:11.940311  424928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3784632.pem /etc/ssl/certs/3ec20f2e.0"
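	[editor's note] The three blocks above show the standard trust-store update: hash each CA with `openssl x509 -hash -noout` and symlink it as /etc/ssl/certs/<hash>.0. The following is only an illustrative Go sketch of that step, not minikube's own helper; the function name linkIntoTrustStore is invented, it assumes openssl on PATH and write access to /etc/ssl/certs, and the input path is the minikubeCA.pem seen in the log.

	// Sketch: compute the openssl subject hash of a CA cert and link it
	// into the system trust store as /etc/ssl/certs/<hash>.0, mirroring
	// the `openssl x509 -hash` + `ln -fs` pair in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkIntoTrustStore(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link, as `ln -fs` does in the logged command.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}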
	I0916 19:07:11.953278  424928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 19:07:11.958935  424928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 19:07:11.965737  424928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 19:07:11.972422  424928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 19:07:11.981580  424928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 19:07:11.992776  424928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 19:07:11.999248  424928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
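	[editor's note] The checks above run `openssl x509 -noout -checkend 86400` against each control-plane cert, i.e. "does this cert expire within 24 hours?". Below is a rough pure-Go equivalent using crypto/x509 rather than shelling out; it is a sketch for clarity only (expiresWithin is an invented name, and the path is one of the certs checked in the log), not the code minikube actually runs.

	// Sketch: report whether a PEM-encoded certificate expires within the
	// given window, equivalent in intent to `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}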
	I0916 19:07:12.006084  424928 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-698346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-698346 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.23 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 19:07:12.006170  424928 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0916 19:07:12.006247  424928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 19:07:12.083166  424928 cri.go:89] found id: "2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab"
	I0916 19:07:12.083198  424928 cri.go:89] found id: "a9520d45639da1e4aee3164f3083a708f95bb2dc8d9f957f980773239724e045"
	I0916 19:07:12.083204  424928 cri.go:89] found id: "043e84088ead119469f99197400b2fdb63afc5f572946d59d45935faf971c712"
	I0916 19:07:12.083227  424928 cri.go:89] found id: "c3b57d5a3eb7b087143cc3b9bb0396531a406d3ff0a471c01b9a0e820125e11c"
	I0916 19:07:12.083232  424928 cri.go:89] found id: "27801dacf3dbf6061f4a9719bd055ac3d27309b110c655b11de4b4d016345f06"
	I0916 19:07:12.083236  424928 cri.go:89] found id: "6a523e3d7effaf770aad018267b7a8cc85ef889e6cc727a9d3ff1387ea2240bb"
	I0916 19:07:12.083240  424928 cri.go:89] found id: "e5ef68bf900e6d144b7ee6b6a2003b5f0ff5b08a0af2022b548329ad8f425493"
	I0916 19:07:12.083244  424928 cri.go:89] found id: "b5aefdd252a8aefe84102101d3c14500df060ec77ab28ea1acf075f5a49e782d"
	I0916 19:07:12.083247  424928 cri.go:89] found id: "9f2cd65018cc6fd770e2d1b4f44f66d5a7ce62fc67ade48b8173843997cdf052"
	I0916 19:07:12.083255  424928 cri.go:89] found id: ""
	I0916 19:07:12.083308  424928 ssh_runner.go:195] Run: sudo runc list -f json
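	[editor's note] The "found id" lines above come from the logged command `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, which prints one container ID per line. The sketch below shows that listing step in Go via os/exec; it is illustrative only (kubeSystemContainerIDs is an invented name), assumes crictl is on PATH and the caller may talk to CRI-O, and omits the `sudo -s eval` wrapper seen in the log.

	// Sketch: collect kube-system container IDs by running the same
	// crictl command recorded in the log and splitting its output by line.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}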
	
	
	==> CRI-O <==
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.865351764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726513650865058492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=698b4fa1-4010-4b61-8373-1354d4d4d23c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.865887879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd8ade9b-800d-4faf-abef-e2bdea5ec3e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.865956196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd8ade9b-800d-4faf-abef-e2bdea5ec3e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.866336047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e03dd04563a12957ce60b0861b4bd5ddcfb72d0c22d95a67ae1ad07ffb08fbf2,PodSandboxId:2dbfed7cac0b17471e7f941aed4ead09767104d5606ba5da214795fb21a95196,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726513647783955543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa021393702ec1f35d5a3963d97238f726f8ecf49bef60ae8773d9766e61007,PodSandboxId:ecfe18b1d4c3df9b3d5ee7e22e16ea677f3e830602829c2a455dd0a507224454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726513647780678955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca526cb6c3184ea59bb63dc346817a89684a8af5445e1b939bd337256a4f21,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726513644798372453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63753dad6e3a4be2981d3c744a6cb353ebc44b199c449fe213dac1a2fa7dff9,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726513644792498924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9504545dc60c14b41ff27d5bbd37a96311a28b21656f7667f7b94b97b50772ad,PodSandboxId:c5bee8e360282ab829a4950bf7247074a1e74a78f5df68fec211d4ebdfe8d851,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634473065261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c06fc2bf6170f21c782f569a3430d912dc6e4fd76406200b015d23a4b6ad2fd,PodSandboxId:9886277f713a0a47d6f3c1c16cf06618ef4a2c7aee4741d914d784a494a0257d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634431991471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-
a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8593eb39609311907072b49ef4b002f0fff3b3a0cfd2d38c75e10470eb7d3,PodSandboxId:4417969b235f364e5c31619660a3298b7c50bae092a831033483e3affcc7d1ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726513632995
339320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff46d98ece866070dd298619a6c7b67d97e2de47ee69b25dd63394f918c6cf78,PodSandboxId:e342029a47f13f01cc76e3db0c9fb34d79ffbd5fe489a14cebfcae0288b91f77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726513632300313343,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4978f9e969ae29bb56f9a2ff6d377c1bcd2cd823bcf857cf0fdf3b42d89efab,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726513632037016590,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726513631890061453,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9520d45639da1e4aee3164f3083a708f95bb2dc8d9f957f980773239724e045,PodSandboxId:3666485ac6c23bb41b1da8a9b00774ead9a4e08a04efbf993a6c62e332db22e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726513595585391860,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:043e84088ead119469f99197400b2fdb63afc5f572946d59d45935faf971c712,PodSandboxId:a83ceed470c992752999c2baf06b924a937c96ee4fef085b870e137b2513d22f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513595019821859,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b57d5a3eb7b087143cc3b9bb0396531a406d3ff0a471c01b9a0e820125e11c,PodSandboxId:51751b2f1e52ed72112be83cf0cf61151d1e1d2d3ec7a1630afd5015f15816e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513594969067040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27801dacf3dbf6061f4a9719bd055ac3d27309b110c655b11de4b4d016345f06,PodSandboxId:4ca92f5a23196e6b00476749d9953e8fb7e8a65421a8
2c8e88e209b3e7917acb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726513594486851133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a523e3d7effaf770aad018267b7a8cc85ef889e6cc727a9d3ff1387ea2240bb,PodSandboxId:95020f37831167234e12b51d85f3c5469131e54dd5717c74116bd560910dc569,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726513580549952548,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5aefdd252a8aefe84102101d3c14500df060ec77ab28ea1acf075f5a49e782d,PodSandboxId:5d731282ef6e898fd51cd447d5f6e5da06f20871e5765e6d8b1b4c4663b2a235,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726513580519591990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd8ade9b-800d-4faf-abef-e2bdea5ec3e7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.913877435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cde33861-79d1-4606-af62-288296a0a79c name=/runtime.v1.RuntimeService/Version
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.913953844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cde33861-79d1-4606-af62-288296a0a79c name=/runtime.v1.RuntimeService/Version
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.915064374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02281f4e-4d04-4b59-b039-b3711935662f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.915491889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726513650915469986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02281f4e-4d04-4b59-b039-b3711935662f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.916373156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92eae488-784c-484e-8da4-04bc715cf369 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.916425665Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92eae488-784c-484e-8da4-04bc715cf369 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.916757702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e03dd04563a12957ce60b0861b4bd5ddcfb72d0c22d95a67ae1ad07ffb08fbf2,PodSandboxId:2dbfed7cac0b17471e7f941aed4ead09767104d5606ba5da214795fb21a95196,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726513647783955543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa021393702ec1f35d5a3963d97238f726f8ecf49bef60ae8773d9766e61007,PodSandboxId:ecfe18b1d4c3df9b3d5ee7e22e16ea677f3e830602829c2a455dd0a507224454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726513647780678955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca526cb6c3184ea59bb63dc346817a89684a8af5445e1b939bd337256a4f21,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726513644798372453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63753dad6e3a4be2981d3c744a6cb353ebc44b199c449fe213dac1a2fa7dff9,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726513644792498924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9504545dc60c14b41ff27d5bbd37a96311a28b21656f7667f7b94b97b50772ad,PodSandboxId:c5bee8e360282ab829a4950bf7247074a1e74a78f5df68fec211d4ebdfe8d851,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634473065261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c06fc2bf6170f21c782f569a3430d912dc6e4fd76406200b015d23a4b6ad2fd,PodSandboxId:9886277f713a0a47d6f3c1c16cf06618ef4a2c7aee4741d914d784a494a0257d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634431991471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-
a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8593eb39609311907072b49ef4b002f0fff3b3a0cfd2d38c75e10470eb7d3,PodSandboxId:4417969b235f364e5c31619660a3298b7c50bae092a831033483e3affcc7d1ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726513632995
339320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff46d98ece866070dd298619a6c7b67d97e2de47ee69b25dd63394f918c6cf78,PodSandboxId:e342029a47f13f01cc76e3db0c9fb34d79ffbd5fe489a14cebfcae0288b91f77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726513632300313343,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4978f9e969ae29bb56f9a2ff6d377c1bcd2cd823bcf857cf0fdf3b42d89efab,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726513632037016590,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726513631890061453,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9520d45639da1e4aee3164f3083a708f95bb2dc8d9f957f980773239724e045,PodSandboxId:3666485ac6c23bb41b1da8a9b00774ead9a4e08a04efbf993a6c62e332db22e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726513595585391860,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:043e84088ead119469f99197400b2fdb63afc5f572946d59d45935faf971c712,PodSandboxId:a83ceed470c992752999c2baf06b924a937c96ee4fef085b870e137b2513d22f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513595019821859,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b57d5a3eb7b087143cc3b9bb0396531a406d3ff0a471c01b9a0e820125e11c,PodSandboxId:51751b2f1e52ed72112be83cf0cf61151d1e1d2d3ec7a1630afd5015f15816e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513594969067040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27801dacf3dbf6061f4a9719bd055ac3d27309b110c655b11de4b4d016345f06,PodSandboxId:4ca92f5a23196e6b00476749d9953e8fb7e8a65421a8
2c8e88e209b3e7917acb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726513594486851133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a523e3d7effaf770aad018267b7a8cc85ef889e6cc727a9d3ff1387ea2240bb,PodSandboxId:95020f37831167234e12b51d85f3c5469131e54dd5717c74116bd560910dc569,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726513580549952548,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5aefdd252a8aefe84102101d3c14500df060ec77ab28ea1acf075f5a49e782d,PodSandboxId:5d731282ef6e898fd51cd447d5f6e5da06f20871e5765e6d8b1b4c4663b2a235,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726513580519591990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92eae488-784c-484e-8da4-04bc715cf369 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.963706167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d169eea7-533a-4ad8-b91e-faa4c97734d4 name=/runtime.v1.RuntimeService/Version
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.963782401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d169eea7-533a-4ad8-b91e-faa4c97734d4 name=/runtime.v1.RuntimeService/Version
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.965529870Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4ec7127-e399-4d01-925f-a0a4292ef226 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.965893742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726513650965869300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4ec7127-e399-4d01-925f-a0a4292ef226 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.966709049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b5b0f20-d20c-47b8-aa92-b08cf28509e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.966758437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b5b0f20-d20c-47b8-aa92-b08cf28509e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:30 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:30.967292577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e03dd04563a12957ce60b0861b4bd5ddcfb72d0c22d95a67ae1ad07ffb08fbf2,PodSandboxId:2dbfed7cac0b17471e7f941aed4ead09767104d5606ba5da214795fb21a95196,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726513647783955543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa021393702ec1f35d5a3963d97238f726f8ecf49bef60ae8773d9766e61007,PodSandboxId:ecfe18b1d4c3df9b3d5ee7e22e16ea677f3e830602829c2a455dd0a507224454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726513647780678955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca526cb6c3184ea59bb63dc346817a89684a8af5445e1b939bd337256a4f21,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726513644798372453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63753dad6e3a4be2981d3c744a6cb353ebc44b199c449fe213dac1a2fa7dff9,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726513644792498924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9504545dc60c14b41ff27d5bbd37a96311a28b21656f7667f7b94b97b50772ad,PodSandboxId:c5bee8e360282ab829a4950bf7247074a1e74a78f5df68fec211d4ebdfe8d851,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634473065261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c06fc2bf6170f21c782f569a3430d912dc6e4fd76406200b015d23a4b6ad2fd,PodSandboxId:9886277f713a0a47d6f3c1c16cf06618ef4a2c7aee4741d914d784a494a0257d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634431991471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-
a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8593eb39609311907072b49ef4b002f0fff3b3a0cfd2d38c75e10470eb7d3,PodSandboxId:4417969b235f364e5c31619660a3298b7c50bae092a831033483e3affcc7d1ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726513632995
339320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff46d98ece866070dd298619a6c7b67d97e2de47ee69b25dd63394f918c6cf78,PodSandboxId:e342029a47f13f01cc76e3db0c9fb34d79ffbd5fe489a14cebfcae0288b91f77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726513632300313343,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4978f9e969ae29bb56f9a2ff6d377c1bcd2cd823bcf857cf0fdf3b42d89efab,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726513632037016590,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726513631890061453,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9520d45639da1e4aee3164f3083a708f95bb2dc8d9f957f980773239724e045,PodSandboxId:3666485ac6c23bb41b1da8a9b00774ead9a4e08a04efbf993a6c62e332db22e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726513595585391860,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:043e84088ead119469f99197400b2fdb63afc5f572946d59d45935faf971c712,PodSandboxId:a83ceed470c992752999c2baf06b924a937c96ee4fef085b870e137b2513d22f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513595019821859,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b57d5a3eb7b087143cc3b9bb0396531a406d3ff0a471c01b9a0e820125e11c,PodSandboxId:51751b2f1e52ed72112be83cf0cf61151d1e1d2d3ec7a1630afd5015f15816e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513594969067040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27801dacf3dbf6061f4a9719bd055ac3d27309b110c655b11de4b4d016345f06,PodSandboxId:4ca92f5a23196e6b00476749d9953e8fb7e8a65421a8
2c8e88e209b3e7917acb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726513594486851133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a523e3d7effaf770aad018267b7a8cc85ef889e6cc727a9d3ff1387ea2240bb,PodSandboxId:95020f37831167234e12b51d85f3c5469131e54dd5717c74116bd560910dc569,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726513580549952548,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5aefdd252a8aefe84102101d3c14500df060ec77ab28ea1acf075f5a49e782d,PodSandboxId:5d731282ef6e898fd51cd447d5f6e5da06f20871e5765e6d8b1b4c4663b2a235,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726513580519591990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b5b0f20-d20c-47b8-aa92-b08cf28509e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.007038938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f273a30e-6450-42c5-8579-17beda4593b0 name=/runtime.v1.RuntimeService/Version
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.007176746Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f273a30e-6450-42c5-8579-17beda4593b0 name=/runtime.v1.RuntimeService/Version
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.008538342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af263820-61f6-4db9-aa2e-eebc4ba2bf92 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.009597037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726513651009490994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af263820-61f6-4db9-aa2e-eebc4ba2bf92 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.010541541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bc79f9e-249c-43d9-bdbe-c4338a5bd8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.010714374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bc79f9e-249c-43d9-bdbe-c4338a5bd8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 16 19:07:31 kubernetes-upgrade-698346 crio[2310]: time="2024-09-16 19:07:31.011155837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e03dd04563a12957ce60b0861b4bd5ddcfb72d0c22d95a67ae1ad07ffb08fbf2,PodSandboxId:2dbfed7cac0b17471e7f941aed4ead09767104d5606ba5da214795fb21a95196,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726513647783955543,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:efa021393702ec1f35d5a3963d97238f726f8ecf49bef60ae8773d9766e61007,PodSandboxId:ecfe18b1d4c3df9b3d5ee7e22e16ea677f3e830602829c2a455dd0a507224454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726513647780678955,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdca526cb6c3184ea59bb63dc346817a89684a8af5445e1b939bd337256a4f21,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726513644798372453,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d63753dad6e3a4be2981d3c744a6cb353ebc44b199c449fe213dac1a2fa7dff9,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726513644792498924,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9504545dc60c14b41ff27d5bbd37a96311a28b21656f7667f7b94b97b50772ad,PodSandboxId:c5bee8e360282ab829a4950bf7247074a1e74a78f5df68fec211d4ebdfe8d851,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634473065261,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"
name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c06fc2bf6170f21c782f569a3430d912dc6e4fd76406200b015d23a4b6ad2fd,PodSandboxId:9886277f713a0a47d6f3c1c16cf06618ef4a2c7aee4741d914d784a494a0257d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726513634431991471,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-
a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda8593eb39609311907072b49ef4b002f0fff3b3a0cfd2d38c75e10470eb7d3,PodSandboxId:4417969b235f364e5c31619660a3298b7c50bae092a831033483e3affcc7d1ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726513632995
339320,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff46d98ece866070dd298619a6c7b67d97e2de47ee69b25dd63394f918c6cf78,PodSandboxId:e342029a47f13f01cc76e3db0c9fb34d79ffbd5fe489a14cebfcae0288b91f77,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726513632300313343,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4978f9e969ae29bb56f9a2ff6d377c1bcd2cd823bcf857cf0fdf3b42d89efab,PodSandboxId:3b20aff67078763eca4f8d1e6798b4c0a4a8027016bba72a1731be8f8861c4b2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726513632037016590,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 903314c9353f124cadfe127e6b64c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab,PodSandboxId:99a820ca558f1941f8fac53611259ba6129f2d094c3d3d65b22ac4a41910ba94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726513631890061453,Lab
els:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efb8622e140a89220cbd9418c01e6a29,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9520d45639da1e4aee3164f3083a708f95bb2dc8d9f957f980773239724e045,PodSandboxId:3666485ac6c23bb41b1da8a9b00774ead9a4e08a04efbf993a6c62e332db22e2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726513595585391860,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08ca6e48-0ce3-4585-8a2b-c777b02f0616,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:043e84088ead119469f99197400b2fdb63afc5f572946d59d45935faf971c712,PodSandboxId:a83ceed470c992752999c2baf06b924a937c96ee4fef085b870e137b2513d22f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513595019821859,Labels:map[string]string{io.k
ubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-txhq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d7809df-a84b-473f-a346-29034b02f825,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3b57d5a3eb7b087143cc3b9bb0396531a406d3ff0a471c01b9a0e820125e11c,PodSandboxId:51751b2f1e52ed72112be83cf0cf61151d1e1d2d3ec7a1630afd5015f15816e8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726513594969067040,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-zzpvb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51a58b0f-521d-4378-ab87-eb36ae436178,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27801dacf3dbf6061f4a9719bd055ac3d27309b110c655b11de4b4d016345f06,PodSandboxId:4ca92f5a23196e6b00476749d9953e8fb7e8a65421a8
2c8e88e209b3e7917acb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726513594486851133,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-trcll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efbbef40-702c-4e00-a263-6934ed332ba8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a523e3d7effaf770aad018267b7a8cc85ef889e6cc727a9d3ff1387ea2240bb,PodSandboxId:95020f37831167234e12b51d85f3c5469131e54dd5717c74116bd560910dc569,Metadata:&Con
tainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726513580549952548,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f6680392a53be12cfabf5756bbfcd69,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5aefdd252a8aefe84102101d3c14500df060ec77ab28ea1acf075f5a49e782d,PodSandboxId:5d731282ef6e898fd51cd447d5f6e5da06f20871e5765e6d8b1b4c4663b2a235,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726513580519591990,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-698346,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f9f05a78147d3b6fb3080173ffd2c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bc79f9e-249c-43d9-bdbe-c4338a5bd8dd name=/runtime.v1.RuntimeService/ListContainers
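
Aside (not part of the captured logs): the "Response: &ListContainersResponse{...}" entries above are CRI RuntimeService.ListContainers results dumped by CRI-O's otel interceptor. A minimal Go sketch of issuing the same RPC is shown below; the socket path assumes CRI-O's default endpoint and may differ on other hosts.

// list_containers.go: query CRI-O over its CRI gRPC socket and print a
// short summary of each container (id, name, attempt, state).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O unix socket without TLS (assumed default endpoint).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s  attempt=%d  %s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}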
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e03dd04563a12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       1                   2dbfed7cac0b1       storage-provisioner
	efa021393702e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   3 seconds ago        Running             kube-proxy                1                   ecfe18b1d4c3d       kube-proxy-trcll
	bdca526cb6c31       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   6 seconds ago        Running             kube-apiserver            2                   99a820ca558f1       kube-apiserver-kubernetes-upgrade-698346
	d63753dad6e3a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   6 seconds ago        Running             kube-controller-manager   2                   3b20aff670787       kube-controller-manager-kubernetes-upgrade-698346
	9504545dc60c1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago       Running             coredns                   1                   c5bee8e360282       coredns-7c65d6cfc9-zzpvb
	8c06fc2bf6170       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 seconds ago       Running             coredns                   1                   9886277f713a0       coredns-7c65d6cfc9-txhq9
	eda8593eb3960       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   18 seconds ago       Running             etcd                      1                   4417969b235f3       etcd-kubernetes-upgrade-698346
	ff46d98ece866       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   18 seconds ago       Running             kube-scheduler            1                   e342029a47f13       kube-scheduler-kubernetes-upgrade-698346
	f4978f9e969ae       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   19 seconds ago       Exited              kube-controller-manager   1                   3b20aff670787       kube-controller-manager-kubernetes-upgrade-698346
	2b66a5f9d7522       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 seconds ago       Exited              kube-apiserver            1                   99a820ca558f1       kube-apiserver-kubernetes-upgrade-698346
	a9520d45639da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   55 seconds ago       Exited              storage-provisioner       0                   3666485ac6c23       storage-provisioner
	043e84088ead1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   56 seconds ago       Exited              coredns                   0                   a83ceed470c99       coredns-7c65d6cfc9-txhq9
	c3b57d5a3eb7b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   56 seconds ago       Exited              coredns                   0                   51751b2f1e52e       coredns-7c65d6cfc9-zzpvb
	27801dacf3dbf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   56 seconds ago       Exited              kube-proxy                0                   4ca92f5a23196       kube-proxy-trcll
	6a523e3d7effa       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   95020f3783116       etcd-kubernetes-upgrade-698346
	b5aefdd252a8a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   5d731282ef6e8       kube-scheduler-kubernetes-upgrade-698346
	
	
	==> coredns [043e84088ead119469f99197400b2fdb63afc5f572946d59d45935faf971c712] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1213337264]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 19:06:35.480) (total time: 26466ms):
	Trace[1213337264]: [26.466435909s] [26.466435909s] END
	[INFO] plugin/kubernetes: Trace[1297098741]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 19:06:35.471) (total time: 26475ms):
	Trace[1297098741]: [26.475814531s] [26.475814531s] END
	[INFO] plugin/kubernetes: Trace[573068368]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 19:06:35.480) (total time: 26466ms):
	Trace[573068368]: [26.466602241s] [26.466602241s] END
	
	
	==> coredns [8c06fc2bf6170f21c782f569a3430d912dc6e4fd76406200b015d23a4b6ad2fd] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9504545dc60c14b41ff27d5bbd37a96311a28b21656f7667f7b94b97b50772ad] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [c3b57d5a3eb7b087143cc3b9bb0396531a406d3ff0a471c01b9a0e820125e11c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1757345105]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 19:06:35.503) (total time: 26441ms):
	Trace[1757345105]: [26.441861966s] [26.441861966s] END
	[INFO] plugin/kubernetes: Trace[1614883892]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 19:06:35.503) (total time: 26442ms):
	Trace[1614883892]: [26.442136992s] [26.442136992s] END
	[INFO] plugin/kubernetes: Trace[1436974467]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 19:06:35.502) (total time: 26442ms):
	Trace[1436974467]: [26.442808883s] [26.442808883s] END
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-698346
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-698346
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 19:06:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-698346
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 19:07:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 19:07:26 +0000   Mon, 16 Sep 2024 19:06:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 19:07:26 +0000   Mon, 16 Sep 2024 19:06:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 19:07:26 +0000   Mon, 16 Sep 2024 19:06:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 19:07:26 +0000   Mon, 16 Sep 2024 19:06:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.23
	  Hostname:    kubernetes-upgrade-698346
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 16594959291c48af98de214dd7842e01
	  System UUID:                16594959-291c-48af-98de-214dd7842e01
	  Boot ID:                    b762f7e5-e5ad-4eca-8080-84b0ed56e20d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-txhq9                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 coredns-7c65d6cfc9-zzpvb                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-kubernetes-upgrade-698346                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         58s
	  kube-system                 kube-apiserver-kubernetes-upgrade-698346             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-698346    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-trcll                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-kubernetes-upgrade-698346             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node kubernetes-upgrade-698346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node kubernetes-upgrade-698346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s (x7 over 72s)  kubelet          Node kubernetes-upgrade-698346 status is now: NodeHasSufficientPID
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           62s                node-controller  Node kubernetes-upgrade-698346 event: Registered Node kubernetes-upgrade-698346 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-698346 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node kubernetes-upgrade-698346 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)    kubelet          Node kubernetes-upgrade-698346 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-698346 event: Registered Node kubernetes-upgrade-698346 in Controller
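
Aside (not part of the captured output): the Conditions table above can also be read programmatically with client-go. A minimal sketch follows, assuming a kubeconfig at the default home location that points at this cluster; the node name is taken from the output above.

// node_conditions.go: fetch the node object and print its conditions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config (assumed to target this cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"kubernetes-upgrade-698346", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same Type/Status/Reason triples shown in the Conditions table.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}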
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.146776] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.067487] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077833] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.167138] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.164075] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.315384] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +4.313463] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +0.064775] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.958318] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[ +14.815541] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.173439] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[Sep16 19:07] systemd-fstab-generator[2174]: Ignoring "noauto" option for root device
	[  +0.105595] kauditd_printk_skb: 99 callbacks suppressed
	[  +0.088974] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.210498] systemd-fstab-generator[2200]: Ignoring "noauto" option for root device
	[  +0.155236] systemd-fstab-generator[2212]: Ignoring "noauto" option for root device
	[  +0.326061] systemd-fstab-generator[2240]: Ignoring "noauto" option for root device
	[  +1.816315] systemd-fstab-generator[2414]: Ignoring "noauto" option for root device
	[  +2.903956] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.072626] kauditd_printk_skb: 24 callbacks suppressed
	[  +4.822890] systemd-fstab-generator[3220]: Ignoring "noauto" option for root device
	[  +0.262783] kauditd_printk_skb: 13 callbacks suppressed
	[  +4.540447] systemd-fstab-generator[3606]: Ignoring "noauto" option for root device
	[  +1.647968] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [6a523e3d7effaf770aad018267b7a8cc85ef889e6cc727a9d3ff1387ea2240bb] <==
	{"level":"info","ts":"2024-09-16T19:06:21.873549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.23:2379"}
	{"level":"info","ts":"2024-09-16T19:06:21.874149Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T19:06:21.874185Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T19:06:21.875197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"639be5bb85f82108","local-member-id":"6311727a8df181c7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:06:21.875297Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:06:21.875333Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:06:44.228153Z","caller":"traceutil/trace.go:171","msg":"trace[575526019] linearizableReadLoop","detail":"{readStateIndex:395; appliedIndex:394; }","duration":"121.731467ms","start":"2024-09-16T19:06:44.106337Z","end":"2024-09-16T19:06:44.228068Z","steps":["trace[575526019] 'read index received'  (duration: 60.168008ms)","trace[575526019] 'applied index is now lower than readState.Index'  (duration: 61.56294ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T19:06:44.228432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.954927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/coredns-7c65d6cfc9-txhq9.17f5cef7e9fdbd36\" ","response":"range_response_count:1 size:811"}
	{"level":"info","ts":"2024-09-16T19:06:44.228490Z","caller":"traceutil/trace.go:171","msg":"trace[117531471] range","detail":"{range_begin:/registry/events/kube-system/coredns-7c65d6cfc9-txhq9.17f5cef7e9fdbd36; range_end:; response_count:1; response_revision:385; }","duration":"122.147731ms","start":"2024-09-16T19:06:44.106332Z","end":"2024-09-16T19:06:44.228479Z","steps":["trace[117531471] 'agreement among raft nodes before linearized reading'  (duration: 121.902555ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T19:06:44.228674Z","caller":"traceutil/trace.go:171","msg":"trace[828713252] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"191.070738ms","start":"2024-09-16T19:06:44.037596Z","end":"2024-09-16T19:06:44.228667Z","steps":["trace[828713252] 'process raft request'  (duration: 128.947224ms)","trace[828713252] 'compare'  (duration: 61.365452ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T19:06:44.557872Z","caller":"traceutil/trace.go:171","msg":"trace[1294917338] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:395; }","duration":"313.691356ms","start":"2024-09-16T19:06:44.244161Z","end":"2024-09-16T19:06:44.557852Z","steps":["trace[1294917338] 'read index received'  (duration: 288.762657ms)","trace[1294917338] 'applied index is now lower than readState.Index'  (duration: 24.92732ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T19:06:44.557937Z","caller":"traceutil/trace.go:171","msg":"trace[1137350532] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"326.926719ms","start":"2024-09-16T19:06:44.230997Z","end":"2024-09-16T19:06:44.557923Z","steps":["trace[1137350532] 'process raft request'  (duration: 301.992564ms)","trace[1137350532] 'compare'  (duration: 24.668848ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T19:06:44.558044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.858555ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T19:06:44.558168Z","caller":"traceutil/trace.go:171","msg":"trace[1036017499] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:386; }","duration":"314.003351ms","start":"2024-09-16T19:06:44.244154Z","end":"2024-09-16T19:06:44.558157Z","steps":["trace[1036017499] 'agreement among raft nodes before linearized reading'  (duration: 313.803633ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T19:06:44.559297Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T19:06:44.230981Z","time spent":"327.08602ms","remote":"127.0.0.1:52806","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":796,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-7c65d6cfc9-txhq9.17f5cef7e9fdbd36\" mod_revision:379 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-txhq9.17f5cef7e9fdbd36\" value_size:708 lease:128231626903363958 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-txhq9.17f5cef7e9fdbd36\" > >"}
	{"level":"info","ts":"2024-09-16T19:07:01.939609Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T19:07:01.939657Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-698346","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.23:2380"],"advertise-client-urls":["https://192.168.50.23:2379"]}
	{"level":"warn","ts":"2024-09-16T19:07:01.939771Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T19:07:01.939859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T19:07:02.019516Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T19:07:02.019657Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.23:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T19:07:02.019787Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6311727a8df181c7","current-leader-member-id":"6311727a8df181c7"}
	{"level":"info","ts":"2024-09-16T19:07:02.022744Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.23:2380"}
	{"level":"info","ts":"2024-09-16T19:07:02.022979Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.23:2380"}
	{"level":"info","ts":"2024-09-16T19:07:02.023033Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-698346","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.23:2380"],"advertise-client-urls":["https://192.168.50.23:2379"]}
	
	
	==> etcd [eda8593eb39609311907072b49ef4b002f0fff3b3a0cfd2d38c75e10470eb7d3] <==
	{"level":"info","ts":"2024-09-16T19:07:13.231956Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"639be5bb85f82108","local-member-id":"6311727a8df181c7","added-peer-id":"6311727a8df181c7","added-peer-peer-urls":["https://192.168.50.23:2380"]}
	{"level":"info","ts":"2024-09-16T19:07:13.232163Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"639be5bb85f82108","local-member-id":"6311727a8df181c7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:07:13.232206Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T19:07:13.234892Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:07:13.238041Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.23:2380"}
	{"level":"info","ts":"2024-09-16T19:07:13.238078Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.23:2380"}
	{"level":"info","ts":"2024-09-16T19:07:13.237740Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T19:07:13.239311Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6311727a8df181c7","initial-advertise-peer-urls":["https://192.168.50.23:2380"],"listen-peer-urls":["https://192.168.50.23:2380"],"advertise-client-urls":["https://192.168.50.23:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.23:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T19:07:13.239384Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T19:07:14.710687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T19:07:14.710760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T19:07:14.710790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 received MsgPreVoteResp from 6311727a8df181c7 at term 2"}
	{"level":"info","ts":"2024-09-16T19:07:14.710817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T19:07:14.710825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 received MsgVoteResp from 6311727a8df181c7 at term 3"}
	{"level":"info","ts":"2024-09-16T19:07:14.710833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6311727a8df181c7 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T19:07:14.710840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6311727a8df181c7 elected leader 6311727a8df181c7 at term 3"}
	{"level":"info","ts":"2024-09-16T19:07:14.712986Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T19:07:14.713014Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T19:07:14.713029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T19:07:14.713815Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6311727a8df181c7","local-member-attributes":"{Name:kubernetes-upgrade-698346 ClientURLs:[https://192.168.50.23:2379]}","request-path":"/0/members/6311727a8df181c7/attributes","cluster-id":"639be5bb85f82108","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T19:07:14.713885Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T19:07:14.714005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:07:14.714601Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T19:07:14.714853Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.23:2379"}
	{"level":"info","ts":"2024-09-16T19:07:14.715557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:07:31 up 1 min,  0 users,  load average: 1.77, 0.64, 0.23
	Linux kubernetes-upgrade-698346 5.10.207 #1 SMP Mon Sep 16 15:00:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab] <==
	I0916 19:07:16.182616       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 19:07:16.182856       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0916 19:07:16.182888       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0916 19:07:16.183340       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0916 19:07:16.183373       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0916 19:07:16.183628       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0916 19:07:16.183949       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 19:07:16.202719       1 controller.go:157] Shutting down quota evaluator
	I0916 19:07:16.203888       1 controller.go:176] quota evaluator worker shutdown
	I0916 19:07:16.204421       1 controller.go:176] quota evaluator worker shutdown
	I0916 19:07:16.204529       1 controller.go:176] quota evaluator worker shutdown
	I0916 19:07:16.204558       1 controller.go:176] quota evaluator worker shutdown
	I0916 19:07:16.204655       1 controller.go:176] quota evaluator worker shutdown
	E0916 19:07:16.923626       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0916 19:07:16.927579       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0916 19:07:17.923238       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0916 19:07:17.927852       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0916 19:07:18.923520       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0916 19:07:18.927432       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0916 19:07:19.924069       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0916 19:07:19.928362       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0916 19:07:20.923505       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0916 19:07:20.928432       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0916 19:07:21.922977       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0916 19:07:21.928031       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-apiserver [bdca526cb6c3184ea59bb63dc346817a89684a8af5445e1b939bd337256a4f21] <==
	I0916 19:07:26.809402       1 policy_source.go:224] refreshing policies
	I0916 19:07:26.868314       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 19:07:26.868410       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 19:07:26.868646       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 19:07:26.869205       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 19:07:26.869311       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 19:07:26.872832       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 19:07:26.872954       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 19:07:26.873002       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 19:07:26.875557       1 aggregator.go:171] initial CRD sync complete...
	I0916 19:07:26.875593       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 19:07:26.875600       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 19:07:26.875605       1 cache.go:39] Caches are synced for autoregister controller
	I0916 19:07:26.875684       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 19:07:26.895029       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 19:07:26.903292       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 19:07:27.680361       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 19:07:28.090251       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.23]
	I0916 19:07:28.091480       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 19:07:28.100698       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 19:07:28.574747       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 19:07:28.586150       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 19:07:28.634567       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 19:07:28.756422       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 19:07:28.764154       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [d63753dad6e3a4be2981d3c744a6cb353ebc44b199c449fe213dac1a2fa7dff9] <==
	I0916 19:07:30.201967       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-698346"
	I0916 19:07:30.206392       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.666086ms"
	I0916 19:07:30.207295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="122.841µs"
	I0916 19:07:30.227148       1 shared_informer.go:320] Caches are synced for TTL
	I0916 19:07:30.241353       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0916 19:07:30.241504       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 19:07:30.242324       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-698346"
	I0916 19:07:30.261641       1 shared_informer.go:320] Caches are synced for GC
	I0916 19:07:30.292207       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 19:07:30.294043       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 19:07:30.324937       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 19:07:30.324861       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 19:07:30.341678       1 shared_informer.go:320] Caches are synced for taint
	I0916 19:07:30.341859       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0916 19:07:30.341948       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-698346"
	I0916 19:07:30.341997       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 19:07:30.370308       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 19:07:30.399292       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 19:07:30.845914       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 19:07:30.849290       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 19:07:30.849313       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 19:07:31.729471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.262396ms"
	I0916 19:07:31.729688       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="167.197µs"
	I0916 19:07:31.802981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="63.904873ms"
	I0916 19:07:31.803779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="107.224µs"
	
	
	==> kube-controller-manager [f4978f9e969ae29bb56f9a2ff6d377c1bcd2cd823bcf857cf0fdf3b42d89efab] <==
	
	
	==> kube-proxy [27801dacf3dbf6061f4a9719bd055ac3d27309b110c655b11de4b4d016345f06] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 19:06:35.159330       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 19:06:35.227561       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.23"]
	E0916 19:06:35.227931       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 19:06:35.386334       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 19:06:35.386403       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 19:06:35.386436       1 server_linux.go:169] "Using iptables Proxier"
	I0916 19:06:35.445294       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 19:06:35.464682       1 server.go:483] "Version info" version="v1.31.1"
	I0916 19:06:35.465319       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 19:06:35.474638       1 config.go:199] "Starting service config controller"
	I0916 19:06:35.480306       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 19:06:35.480461       1 config.go:105] "Starting endpoint slice config controller"
	I0916 19:06:35.480988       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 19:06:35.481880       1 config.go:328] "Starting node config controller"
	I0916 19:06:35.488316       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 19:06:35.581502       1 shared_informer.go:320] Caches are synced for service config
	I0916 19:06:35.581484       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 19:06:35.589175       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [efa021393702ec1f35d5a3963d97238f726f8ecf49bef60ae8773d9766e61007] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0916 19:07:28.057530       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0916 19:07:28.068797       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.23"]
	E0916 19:07:28.068951       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 19:07:28.113570       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0916 19:07:28.113664       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0916 19:07:28.113700       1 server_linux.go:169] "Using iptables Proxier"
	I0916 19:07:28.116278       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 19:07:28.116550       1 server.go:483] "Version info" version="v1.31.1"
	I0916 19:07:28.116706       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 19:07:28.118129       1 config.go:199] "Starting service config controller"
	I0916 19:07:28.118375       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 19:07:28.118458       1 config.go:105] "Starting endpoint slice config controller"
	I0916 19:07:28.118494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 19:07:28.119266       1 config.go:328] "Starting node config controller"
	I0916 19:07:28.119311       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 19:07:28.219260       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 19:07:28.219297       1 shared_informer.go:320] Caches are synced for service config
	I0916 19:07:28.219414       1 shared_informer.go:320] Caches are synced for node config
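Both kube-proxy containers log the same nftables cleanup failure before settling on the iptables proxier: the Buildroot guest kernel rejects creating the kube-proxy nft tables. A minimal Go sketch that reproduces the same probe, feeding the rule shown in the log into nft via /dev/stdin (an illustration of what the error means, not kube-proxy's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Feed the rule from the log ("add table ip kube-proxy") to nft,
		// reading it from /dev/stdin as the error message indicates.
		cmd := exec.Command("nft", "-f", "/dev/stdin")
		cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// On this kernel the command fails with
			// "Error: Could not process rule: Operation not supported",
			// which is why kube-proxy falls back to the iptables proxier.
			fmt.Printf("nftables unsupported: %v\n%s", err, out)
			return
		}
		fmt.Println("nftables table created; remove it with: nft delete table ip kube-proxy")
	}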
	
	
	==> kube-scheduler [b5aefdd252a8aefe84102101d3c14500df060ec77ab28ea1acf075f5a49e782d] <==
	E0916 19:06:23.237119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:23.236708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 19:06:23.237137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:23.236764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 19:06:23.237170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.076184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 19:06:24.076236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.260244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 19:06:24.260359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.274279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 19:06:24.274525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.294423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 19:06:24.294619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.312477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 19:06:24.312539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.404333       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 19:06:24.404859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.509023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 19:06:24.509076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.526870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 19:06:24.526996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 19:06:24.625913       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 19:06:24.626967       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 19:06:26.319256       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 19:07:01.940631       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff46d98ece866070dd298619a6c7b67d97e2de47ee69b25dd63394f918c6cf78] <==
	W0916 19:07:15.968278       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 19:07:15.968373       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 19:07:15.968408       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 19:07:15.968484       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 19:07:16.096854       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 19:07:16.097030       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 19:07:16.103539       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 19:07:16.106313       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 19:07:16.107020       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 19:07:16.107117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 19:07:16.207381       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 19:07:26.722601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0916 19:07:26.724372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0916 19:07:26.724574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0916 19:07:26.733044       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0916 19:07:26.735301       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0916 19:07:26.735361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0916 19:07:26.735404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0916 19:07:26.735451       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0916 19:07:26.735483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0916 19:07:26.735512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0916 19:07:26.735551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0916 19:07:26.735579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0916 19:07:26.777795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0916 19:07:26.798731       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	
	
	==> kubelet <==
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522244    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/903314c9353f124cadfe127e6b64c9d8-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-698346\" (UID: \"903314c9353f124cadfe127e6b64c9d8\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522261    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f9f05a78147d3b6fb3080173ffd2c1e-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-698346\" (UID: \"1f9f05a78147d3b6fb3080173ffd2c1e\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522395    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efb8622e140a89220cbd9418c01e6a29-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-698346\" (UID: \"efb8622e140a89220cbd9418c01e6a29\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522416    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efb8622e140a89220cbd9418c01e6a29-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-698346\" (UID: \"efb8622e140a89220cbd9418c01e6a29\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522446    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/903314c9353f124cadfe127e6b64c9d8-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-698346\" (UID: \"903314c9353f124cadfe127e6b64c9d8\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522470    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/903314c9353f124cadfe127e6b64c9d8-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-698346\" (UID: \"903314c9353f124cadfe127e6b64c9d8\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.522487    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/903314c9353f124cadfe127e6b64c9d8-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-698346\" (UID: \"903314c9353f124cadfe127e6b64c9d8\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: E0916 19:07:24.523655    3227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-698346?timeout=10s\": dial tcp 192.168.50.23:8443: connect: connection refused" interval="400ms"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.694758    3227 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: E0916 19:07:24.695839    3227 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.23:8443: connect: connection refused" node="kubernetes-upgrade-698346"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.775742    3227 scope.go:117] "RemoveContainer" containerID="2b66a5f9d75228c4dabfa59513b25910578a49e5f76918de38af59806aa2dcab"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:24.777731    3227 scope.go:117] "RemoveContainer" containerID="f4978f9e969ae29bb56f9a2ff6d377c1bcd2cd823bcf857cf0fdf3b42d89efab"
	Sep 16 19:07:24 kubernetes-upgrade-698346 kubelet[3227]: E0916 19:07:24.926318    3227 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-698346?timeout=10s\": dial tcp 192.168.50.23:8443: connect: connection refused" interval="800ms"
	Sep 16 19:07:25 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:25.098072    3227 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-698346"
	Sep 16 19:07:26 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:26.909438    3227 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-698346"
	Sep 16 19:07:26 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:26.909862    3227 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-698346"
	Sep 16 19:07:26 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:26.909941    3227 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 19:07:26 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:26.911204    3227 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:27.298952    3227 apiserver.go:52] "Watching apiserver"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:27.316514    3227 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:27.395594    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efbbef40-702c-4e00-a263-6934ed332ba8-xtables-lock\") pod \"kube-proxy-trcll\" (UID: \"efbbef40-702c-4e00-a263-6934ed332ba8\") " pod="kube-system/kube-proxy-trcll"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:27.395770    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efbbef40-702c-4e00-a263-6934ed332ba8-lib-modules\") pod \"kube-proxy-trcll\" (UID: \"efbbef40-702c-4e00-a263-6934ed332ba8\") " pod="kube-system/kube-proxy-trcll"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: I0916 19:07:27.395841    3227 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08ca6e48-0ce3-4585-8a2b-c777b02f0616-tmp\") pod \"storage-provisioner\" (UID: \"08ca6e48-0ce3-4585-8a2b-c777b02f0616\") " pod="kube-system/storage-provisioner"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: E0916 19:07:27.485716    3227 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-698346\" already exists" pod="kube-system/etcd-kubernetes-upgrade-698346"
	Sep 16 19:07:27 kubernetes-upgrade-698346 kubelet[3227]: E0916 19:07:27.487695    3227 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-698346\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-698346"
	
	
	==> storage-provisioner [a9520d45639da1e4aee3164f3083a708f95bb2dc8d9f957f980773239724e045] <==
	I0916 19:06:35.721635       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 19:06:35.741848       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 19:06:35.742029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 19:06:35.751309       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 19:06:35.751977       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-698346_aeed296f-cf3a-404e-94fc-c43a09929e32!
	I0916 19:06:35.752386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bb3511b3-3060-43e4-8bb5-6793549209cb", APIVersion:"v1", ResourceVersion:"375", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-698346_aeed296f-cf3a-404e-94fc-c43a09929e32 became leader
	I0916 19:06:35.852541       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-698346_aeed296f-cf3a-404e-94fc-c43a09929e32!
	
	
	==> storage-provisioner [e03dd04563a12957ce60b0861b4bd5ddcfb72d0c22d95a67ae1ad07ffb08fbf2] <==
	I0916 19:07:27.955408       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 19:07:27.971364       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 19:07:27.971428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 19:07:30.452628  425433 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19649-371203/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
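The stderr line about lastStart.txt comes from Go's bufio.Scanner, whose default maximum token size is 64 KiB (bufio.MaxScanTokenSize); a single log line longer than that produces "bufio.Scanner: token too long". A minimal sketch of reading such a file with an enlarged buffer (the file path here is illustrative, not the harness's real path):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Without this call, any line longer than 64 KiB makes sc.Err()
		// return "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}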
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-698346 -n kubernetes-upgrade-698346
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-698346 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-698346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-698346
--- FAIL: TestKubernetesUpgrade (416.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.059s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0916 19:23:56.983682  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (20m12s)
		TestStartStop (20m4s)
		TestStartStop/group/default-k8s-diff-port (15m17s)
		TestStartStop/group/default-k8s-diff-port/serial (15m17s)
		TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (2m6s)
		TestStartStop/group/embed-certs (15m41s)
		TestStartStop/group/embed-certs/serial (15m41s)
		TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (1m46s)
		TestStartStop/group/no-preload (16m46s)
		TestStartStop/group/no-preload/serial (16m46s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (51s)
		TestStartStop/group/old-k8s-version (18m8s)
		TestStartStop/group/old-k8s-version/serial (18m8s)
		TestStartStop/group/old-k8s-version/serial/SecondStart (11m23s)

                                                
                                                
goroutine 2903 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 15 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0007ad520, 0xc0007f3bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc000898498, {0x4cf96a0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x4db7de0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000645040)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000645040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0007a2100)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2599 [select, 13 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e2b200, 0xc0020b4070)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2596
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 476 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001905200, 0xc001b10460)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 342
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 67 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0xff
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 66
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x167

                                                
                                                
goroutine 2226 [chan receive, 15 minutes]:
testing.(*T).Run(0xc0007ad040, {0x2927010?, 0x0?}, 0xc001552180)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0007ad040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0007ad040, 0xc001a2bb40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2204
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1617 [chan receive, 22 minutes]:
testing.(*T).Run(0xc00152a000, {0x2925a4b?, 0x55b79c?}, 0xc00082ae58)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00152a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc00152a000, 0x3410cb8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2623 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f3f0, 0xc0004fdf10}, {0x3782680, 0xc000199bc0}, 0x1, 0x0, 0xc000ae7c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f3f0?, 0xc000488000?}, 0x3b9aca00, 0xc000ae7e10?, 0x1, 0xc000ae7c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f3f0, 0xc000488000}, 0xc0019d2820, {0xc001598000, 0x11}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f3f0, 0xc000488000}, 0xc0019d2820, {0xc001598000, 0x11}, {0x2930d97?, 0xc001336760?}, {0x55b653?, 0x4b1aaf?}, {0xc000654700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0019d2820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0019d2820, 0xc000804580)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2325
	/usr/local/go/src/testing/testing.go:1743 +0x390
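Goroutine 2623 is the UserAppExistsAfterStop check blocked inside PodWait, which polls through k8s.io/apimachinery's wait.PollUntilContextTimeout with the 9-minute budget noted above. A minimal sketch of that polling pattern (appearsRunning is a hypothetical stand-in, not the helper's real pod-phase check):

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	// appearsRunning is a placeholder condition; the real helper lists pods
	// matching a label selector and inspects their phase via the API server.
	func appearsRunning(ctx context.Context) (bool, error) {
		return false, nil // keep polling
	}

	func main() {
		// Poll once per second, give up after 9 minutes, checking the
		// condition immediately on the first pass.
		err := wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true, appearsRunning)
		if err != nil {
			fmt.Println("pods never matched:", err) // e.g. context deadline exceeded
		}
	}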

                                                
                                                
goroutine 2374 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0007ada00, {0x2951f4f?, 0xc001338d70?}, 0xc0007a2600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0007ada00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0007ada00, 0xc000804880)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2207
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 211 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7f7f41a180a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0003fc000?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0003fc000)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0003fc000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0008d8a40)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008d8a40)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc00022f680, {0x3781ff0, 0xc0008d8a40})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc00022f680)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00083a680?, 0xc00083a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 192
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 2747 [IO wait]:
internal/poll.runtime_pollWait(0x7f7f305d7208, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000805900?, 0xc001908800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000805900, {0xc001908800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000805900, {0xc001908800?, 0x10?, 0xc001a3b8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0019da640, {0xc001908800?, 0xc00190885f?, 0x6f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001d23a70, {0xc001908800?, 0x0?, 0xc001d23a70?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0019222b8, {0x37688e0, 0xc001d23a70})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001922008, {0x7f7f305d7310, 0xc001d22630}, 0xc001a3ba10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001922008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001922008, {0xc000b4e000, 0x1000, 0xc001445880?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc000adf140, {0xc0013e23c0, 0x9, 0x4cb3c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766b80, 0xc000adf140}, {0xc0013e23c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0013e23c0, 0x9, 0x47b965?}, {0x3766b80?, 0xc000adf140?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0013e2380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001a3bfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001905b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2746
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2387 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0006fcbd0, 0x3)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0007f5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aad40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006fcd00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000af3790, {0x37680e0, 0xc001be2810}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000af3790, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2268
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf
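Goroutine 2387 is client-go's certificate-rotation worker parked on its workqueue, driven by the wait.Until/BackoffUntil loop visible in the trace. A minimal sketch of that loop shape with a placeholder worker (not client-go's actual rotation logic):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		stopCh := make(chan struct{})

		// wait.Until re-runs the worker every period until stopCh is closed,
		// the same shape the cert_rotation goroutine is built on.
		go wait.Until(func() {
			// Placeholder work; client-go's worker drains a workqueue of key updates here.
			fmt.Println("worker tick")
		}, time.Second, stopCh)

		time.Sleep(3 * time.Second)
		close(stopCh) // closing the stop channel lets the loop exit
		time.Sleep(100 * time.Millisecond)
	}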

                                                
                                                
goroutine 2204 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000039ba0, 0x3410ef8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1695
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2592 [IO wait]:
internal/poll.runtime_pollWait(0x7f7f41a182b8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000805f00?, 0xc00096f800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000805f00, {0xc00096f800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000805f00, {0xc00096f800?, 0x9f65f2?, 0xc00151e9a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000c2c138, {0xc00096f800?, 0xc000c5e100?, 0xc00096f85f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0007b6b40, {0xc00096f800?, 0x0?, 0xc0007b6b40?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc001922d38, {0x37688e0, 0xc0007b6b40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001922a88, {0x3767ba0, 0xc000c2c138}, 0xc00151ea10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001922a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001922a88, {0xc000c75000, 0x1000, 0xc0014aca80?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001e1c660, {0xc0019803c0, 0x9, 0x4cb3c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766b80, 0xc001e1c660}, {0xc0019803c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0019803c0, 0x9, 0x47b965?}, {0x3766b80?, 0xc001e1c660?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc001980380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00151efa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000290f00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2591
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2597 [IO wait]:
internal/poll.runtime_pollWait(0x7f7f41a17e98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000adf860?, 0xc00202ab73?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000adf860, {0xc00202ab73, 0x48d, 0x48d})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000612830, {0xc00202ab73?, 0x411b30?, 0x20a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00191b2c0, {0x3766960, 0xc0019da708})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3766ae0, 0xc00191b2c0}, {0x3766960, 0xc0019da708}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000612830?, {0x3766ae0, 0xc00191b2c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000612830, {0x3766ae0, 0xc00191b2c0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3766ae0, 0xc00191b2c0}, {0x37669e0, 0xc000612830}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001b11f10?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2596
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 1695 [chan receive, 20 minutes]:
testing.(*T).Run(0xc00152a680, {0x2925a4b?, 0x55b653?}, 0x3410ef8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00152a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00152a680, 0x3410d00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2268 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006fcd00, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2408
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 1944 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000039380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000039380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000039380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000039380, 0xc000804680)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2206 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000fc4e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000fc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000fc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0000fc4e0, 0xc001a2ba00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2204
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 391 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006fdc40, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 317
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2598 [IO wait]:
internal/poll.runtime_pollWait(0x7f7f41a17b80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000adf920?, 0xc001d9537c?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000adf920, {0xc001d9537c, 0x14c84, 0x14c84})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000612848, {0xc001d9537c?, 0x4919e0?, 0x3feb7?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00191b2f0, {0x3766960, 0xc001b7e160})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3766ae0, 0xc00191b2f0}, {0x3766960, 0xc001b7e160}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000612848?, {0x3766ae0, 0xc00191b2f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000612848, {0x3766ae0, 0xc00191b2f0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3766ae0, 0xc00191b2f0}, {0x37669e0, 0xc000612848}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0007a2600?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2596
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 694 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc001750900)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 692
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 2385 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3785c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 1857 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0019d2b60, 0xc00082ae58)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1617
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2388 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f600, 0xc000064310}, 0xc000096750, 0xc001a3ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f600, 0xc000064310}, 0xe0?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f600?, 0xc000064310?}, 0xc00152b6c0?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0x5a1aa4?, 0xc0018e1ce0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2268
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 418 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000291500, 0xc000065b20)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 321
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 375 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0006fdc10, 0x23)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013d5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aad40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006fdc40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00066b610, {0x37680e0, 0xc000ad33b0}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00066b610, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 391
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 390 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3785c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 317
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 376 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f600, 0xc000064310}, 0xc001335f50, 0xc00029bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f600, 0xc000064310}, 0x6?, 0xc001335f50, 0xc001335f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f600?, 0xc000064310?}, 0xc0000fcd00?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001335fd0?, 0x5a1aa4?, 0xc000536ae0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 391
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 377 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 376
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 695 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc001750900)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 692
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 546 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000290480, 0xc0013fe700)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 449
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2389 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2388
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2325 [chan receive]:
testing.(*T).Run(0xc00083a820, {0x2951f4f?, 0xc00050ad70?}, 0xc000804580)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00083a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00083a820, 0xc00169c080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2208
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2267 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3785c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2408
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2300 [chan receive, 13 minutes]:
testing.(*T).Run(0xc00083a000, {0x293300c?, 0xc000c6a570?}, 0xc0007a2600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00083a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00083a000, 0xc001552000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2205
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2466 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a2ab00, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2207 [chan receive, 15 minutes]:
testing.(*T).Run(0xc0000fcd00, {0x2927010?, 0x0?}, 0xc000804880)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000fcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0000fcd00, 0xc001a2ba40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2204
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1943 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0000391e0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0000391e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0000391e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0000391e0, 0xc000804600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2613 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f3f0, 0xc00044a4d0}, {0x3782680, 0xc000822920}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f3f0?, 0xc000488620?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f3f0, 0xc000488620}, 0xc00187c1a0, {0xc000501120, 0x1c}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f3f0, 0xc000488620}, 0xc00187c1a0, {0xc000501120, 0x1c}, {0x294f08e?, 0xc00193d760?}, {0x55b653?, 0x4b1aaf?}, {0xc0014ec000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00187c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00187c1a0, 0xc0007a2600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2374
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2631 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x378f3f0, 0xc0007a4c40}, {0x3782680, 0xc000c257c0}, 0x1, 0x0, 0xc000aebc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x378f3f0?, 0xc000498540?}, 0x3b9aca00, 0xc000b47e10?, 0x1, 0xc000b47c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x378f3f0, 0xc000498540}, 0xc0019d2340, {0xc0019f22d0, 0x12}, {0x294c130, 0x14}, {0x29640b7, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x378f3f0, 0xc000498540}, 0xc0019d2340, {0xc0019f22d0, 0x12}, {0x2932ff6?, 0xc001339f60?}, {0x55b653?, 0x4b1aaf?}, {0xc000654800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0019d2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0019d2340, 0xc000804e00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2352
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2421 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2420
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2352 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0019d21a0, {0x2951f4f?, 0xc00193bd70?}, 0xc000804e00)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0019d21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0019d21a0, 0xc001552180)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2226
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2596 [syscall, 13 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x16, 0xc00133bb30, 0x4, 0xc000b4dcb0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc00169e900?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001e2b200)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001e2b200)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00187c4e0, 0xc001e2b200)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x378f3f0, 0xc00055c5b0}, 0xc00187c4e0, {0xc0019f24f8, 0x16}, {0x0?, 0xc000507760?}, {0x55b653?, 0x4b1aaf?}, {0xc001e2a780, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00187c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00187c4e0, 0xc0007a2600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2300
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2420 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x378f600, 0xc000064310}, 0xc00193ff50, 0xc001a38f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x378f600, 0xc000064310}, 0x50?, 0xc00193ff50, 0xc00193ff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x378f600?, 0xc000064310?}, 0xc0007ad860?, 0x55bf60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00193ffd0?, 0x5a1aa4?, 0xc001a2a1c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2466
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2208 [chan receive, 16 minutes]:
testing.(*T).Run(0xc0000fdd40, {0x2927010?, 0x0?}, 0xc00169c080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000fdd40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0000fdd40, 0xc001a2ba80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2204
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1890 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019d2d00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019d2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019d2d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019d2d00, 0xc0007a2800)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1945 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000039520)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000039520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000039520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000039520, 0xc000804700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1946 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000039860)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000039860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000039860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000039860, 0xc000804780)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2140 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00152a9c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152a9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00152a9c0, 0xc001552b00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2582 [IO wait]:
internal/poll.runtime_pollWait(0x7f7f41a17c88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0007a3780?, 0xc00096f000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0007a3780, {0xc00096f000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0007a3780, {0xc00096f000?, 0x10?, 0xc00029e8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000c2c030, {0xc00096f000?, 0xc00096f05f?, 0x6f?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0007b6eb8, {0xc00096f000?, 0x0?, 0xc0007b6eb8?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0019229b8, {0x37688e0, 0xc0007b6eb8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001922708, {0x7f7f305d7310, 0xc001dc6078}, 0xc00029ea10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001922708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001922708, {0xc0006b5000, 0x1000, 0xc0014aca80?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001456fc0, {0xc001980200, 0x9, 0x4cb3c70?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3766b80, 0xc001456fc0}, {0xc001980200, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001980200, 0x9, 0x47b965?}, {0x3766b80?, 0xc001456fc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0019801c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00029efa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000290000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2581
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2141 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc00095aa00)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc00152b040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00152b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00152b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00152b040, 0xc001552b80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2205 [chan receive, 18 minutes]:
testing.(*T).Run(0xc000039d40, {0x2927010?, 0x0?}, 0xc001552000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000039d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000039d40, 0xc001a2b9c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2204
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2419 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001a2aad0, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000aefd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37aad40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a2ab00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000615740, {0x37680e0, 0xc000c38d50}, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000615740, 0x3b9aca00, 0x0, 0x1, 0xc000064310)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2466
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf
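
The goroutines above that have been sitting in testing.(*testContext).waitParallel for 20 minutes are subtests that called t.Parallel() and are queued until a slot under go test's -parallel limit frees up; they are parked by the test runner, not deadlocked. A minimal, hypothetical sketch of that pattern (the test and subtest names are illustrative, not taken from the minikube integration suite):

	package example

	import "testing"

	func TestParallelSketch(t *testing.T) {
		cases := []string{"auto", "kindnet", "calico"} // placeholder subtest names
		for _, name := range cases {
			t.Run(name, func(t *testing.T) {
				// Each subtest pauses here, inside testing.(*testContext).waitParallel,
				// until the runner has a free parallel slot; with long-running siblings
				// that wait can last as long as the stacks above show.
				t.Parallel()
				// ... start a cluster and run assertions ...
			})
		}
	}

Raising -parallel (or -test.parallel on a compiled test binary) widens that window, at the cost of more clusters running concurrently.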

                                                
                                    

Test pass (171/213)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 12.97
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
22 TestOffline 82.78
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
28 TestCertOptions 75.99
29 TestCertExpiration 286.82
31 TestForceSystemdFlag 90.39
32 TestForceSystemdEnv 55.7
34 TestKVMDriverInstallOrUpdate 4.56
38 TestErrorSpam/setup 43.56
39 TestErrorSpam/start 0.35
40 TestErrorSpam/status 0.76
41 TestErrorSpam/pause 1.61
42 TestErrorSpam/unpause 1.8
43 TestErrorSpam/stop 5.2
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 88.16
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 35.18
50 TestFunctional/serial/KubeContext 0.05
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.79
55 TestFunctional/serial/CacheCmd/cache/add_local 2.26
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.11
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
63 TestFunctional/serial/ExtraConfig 43.02
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.49
66 TestFunctional/serial/LogsFileCmd 1.52
67 TestFunctional/serial/InvalidService 4.82
69 TestFunctional/parallel/ConfigCmd 0.36
70 TestFunctional/parallel/DashboardCmd 19.64
71 TestFunctional/parallel/DryRun 0.31
72 TestFunctional/parallel/InternationalLanguage 0.16
73 TestFunctional/parallel/StatusCmd 1.18
77 TestFunctional/parallel/ServiceCmdConnect 9.09
78 TestFunctional/parallel/AddonsCmd 0.15
79 TestFunctional/parallel/PersistentVolumeClaim 43.91
81 TestFunctional/parallel/SSHCmd 0.45
82 TestFunctional/parallel/CpCmd 1.47
83 TestFunctional/parallel/MySQL 25.86
84 TestFunctional/parallel/FileSync 0.23
85 TestFunctional/parallel/CertSync 1.46
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
93 TestFunctional/parallel/License 1.12
94 TestFunctional/parallel/ServiceCmd/DeployApp 12.24
95 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
96 TestFunctional/parallel/MountCmd/any-port 11.09
97 TestFunctional/parallel/ProfileCmd/profile_list 0.36
98 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
99 TestFunctional/parallel/Version/short 0.05
100 TestFunctional/parallel/Version/components 0.75
101 TestFunctional/parallel/MountCmd/specific-port 2
102 TestFunctional/parallel/ServiceCmd/List 0.43
103 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
104 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
105 TestFunctional/parallel/ServiceCmd/Format 0.32
106 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
107 TestFunctional/parallel/ServiceCmd/URL 0.37
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.43
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.58
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.6
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
121 TestFunctional/parallel/ImageCommands/ImageBuild 11.25
122 TestFunctional/parallel/ImageCommands/Setup 1.98
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.1
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.18
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.49
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.67
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.9
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
133 TestFunctional/delete_echo-server_images 0.03
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
139 TestMultiControlPlane/serial/StartCluster 200.52
140 TestMultiControlPlane/serial/DeployApp 6.69
141 TestMultiControlPlane/serial/PingHostFromPods 1.26
142 TestMultiControlPlane/serial/AddWorkerNode 58.64
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
145 TestMultiControlPlane/serial/CopyFile 13.09
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
151 TestMultiControlPlane/serial/DeleteSecondaryNode 16.79
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
154 TestMultiControlPlane/serial/RestartCluster 357.4
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
156 TestMultiControlPlane/serial/AddSecondaryNode 78
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
161 TestJSONOutput/start/Command 80.12
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.73
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.62
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.37
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.21
189 TestMainNoArgs 0.05
190 TestMinikubeProfile 91.7
193 TestMountStart/serial/StartWithMountFirst 29.17
194 TestMountStart/serial/VerifyMountFirst 0.39
195 TestMountStart/serial/StartWithMountSecond 31.87
196 TestMountStart/serial/VerifyMountSecond 0.38
197 TestMountStart/serial/DeleteFirst 0.69
198 TestMountStart/serial/VerifyMountPostDelete 0.39
199 TestMountStart/serial/Stop 1.28
200 TestMountStart/serial/RestartStopped 23.91
201 TestMountStart/serial/VerifyMountPostStop 0.37
204 TestMultiNode/serial/FreshStart2Nodes 115.51
205 TestMultiNode/serial/DeployApp2Nodes 5.25
206 TestMultiNode/serial/PingHostFrom2Pods 0.83
207 TestMultiNode/serial/AddNode 53.97
208 TestMultiNode/serial/MultiNodeLabels 0.07
209 TestMultiNode/serial/ProfileList 0.22
210 TestMultiNode/serial/CopyFile 7.29
211 TestMultiNode/serial/StopNode 2.31
212 TestMultiNode/serial/StartAfterStop 40.31
214 TestMultiNode/serial/DeleteNode 2.01
216 TestMultiNode/serial/RestartMultiNode 200.73
217 TestMultiNode/serial/ValidateNameConflict 46.31
224 TestScheduledStopUnix 114.98
228 TestRunningBinaryUpgrade 201.9
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
234 TestNoKubernetes/serial/StartWithK8s 96.08
235 TestStoppedBinaryUpgrade/Setup 2.43
236 TestStoppedBinaryUpgrade/Upgrade 122.87
237 TestNoKubernetes/serial/StartWithStopK8s 42.78
238 TestNoKubernetes/serial/Start 28.76
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
240 TestNoKubernetes/serial/ProfileList 15.45
241 TestNoKubernetes/serial/Stop 1.4
242 TestNoKubernetes/serial/StartNoArgs 27.3
243 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
247 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
264 TestPause/serial/Start 106.99
269 TestPause/serial/SecondStartNoReconfiguration 58.81
270 TestPause/serial/Pause 0.81
271 TestPause/serial/VerifyStatus 0.26
272 TestPause/serial/Unpause 0.77
273 TestPause/serial/PauseAgain 0.99
274 TestPause/serial/DeletePaused 1.16
275 TestPause/serial/VerifyDeletedResources 0.57
x
+
TestDownloadOnly/v1.20.0/json-events (24.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-261847 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-261847 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (24.929879281s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-261847
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-261847: exit status 85 (65.614522ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-261847 | jenkins | v1.34.0 | 16 Sep 24 17:24 UTC |          |
	|         | -p download-only-261847        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:24:19
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:24:19.371011  378475 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:24:19.371141  378475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:24:19.371152  378475 out.go:358] Setting ErrFile to fd 2...
	I0916 17:24:19.371156  378475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:24:19.371379  378475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	W0916 17:24:19.371528  378475 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19649-371203/.minikube/config/config.json: open /home/jenkins/minikube-integration/19649-371203/.minikube/config/config.json: no such file or directory
	I0916 17:24:19.372291  378475 out.go:352] Setting JSON to true
	I0916 17:24:19.373289  378475 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4002,"bootTime":1726503457,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:24:19.373423  378475 start.go:139] virtualization: kvm guest
	I0916 17:24:19.376219  378475 out.go:97] [download-only-261847] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:24:19.376422  378475 notify.go:220] Checking for updates...
	W0916 17:24:19.376415  378475 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 17:24:19.378246  378475 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:24:19.379965  378475 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:24:19.381625  378475 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 17:24:19.382991  378475 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 17:24:19.384637  378475 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:24:19.387451  378475 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:24:19.387800  378475 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:24:19.423395  378475 out.go:97] Using the kvm2 driver based on user configuration
	I0916 17:24:19.423441  378475 start.go:297] selected driver: kvm2
	I0916 17:24:19.423450  378475 start.go:901] validating driver "kvm2" against <nil>
	I0916 17:24:19.423933  378475 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:24:19.424044  378475 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 17:24:19.440410  378475 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 17:24:19.440498  378475 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:24:19.441153  378475 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 17:24:19.441355  378475 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:24:19.441405  378475 cni.go:84] Creating CNI manager for ""
	I0916 17:24:19.441468  378475 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 17:24:19.441479  378475 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:24:19.441563  378475 start.go:340] cluster config:
	{Name:download-only-261847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-261847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:24:19.441809  378475 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:24:19.444216  378475 out.go:97] Downloading VM boot image ...
	I0916 17:24:19.444277  378475 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/iso/amd64/minikube-v1.34.0-1726481713-19649-amd64.iso
	I0916 17:24:29.935901  378475 out.go:97] Starting "download-only-261847" primary control-plane node in "download-only-261847" cluster
	I0916 17:24:29.935947  378475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 17:24:30.045555  378475 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0916 17:24:30.045595  378475 cache.go:56] Caching tarball of preloaded images
	I0916 17:24:30.045780  378475 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0916 17:24:30.048071  378475 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 17:24:30.048107  378475 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0916 17:24:30.153763  378475 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-261847 host does not exist
	  To start a cluster, run: "minikube start -p download-only-261847"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-261847
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (12.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-803428 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-803428 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.966201066s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (12.97s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-803428
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-803428: exit status 85 (63.374234ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-261847 | jenkins | v1.34.0 | 16 Sep 24 17:24 UTC |                     |
	|         | -p download-only-261847        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 17:24 UTC | 16 Sep 24 17:24 UTC |
	| delete  | -p download-only-261847        | download-only-261847 | jenkins | v1.34.0 | 16 Sep 24 17:24 UTC | 16 Sep 24 17:24 UTC |
	| start   | -o=json --download-only        | download-only-803428 | jenkins | v1.34.0 | 16 Sep 24 17:24 UTC |                     |
	|         | -p download-only-803428        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 17:24:44
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 17:24:44.637428  378724 out.go:345] Setting OutFile to fd 1 ...
	I0916 17:24:44.637574  378724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:24:44.637600  378724 out.go:358] Setting ErrFile to fd 2...
	I0916 17:24:44.637609  378724 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 17:24:44.637808  378724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 17:24:44.638393  378724 out.go:352] Setting JSON to true
	I0916 17:24:44.639412  378724 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":4028,"bootTime":1726503457,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 17:24:44.639510  378724 start.go:139] virtualization: kvm guest
	I0916 17:24:44.641859  378724 out.go:97] [download-only-803428] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 17:24:44.642036  378724 notify.go:220] Checking for updates...
	I0916 17:24:44.643959  378724 out.go:169] MINIKUBE_LOCATION=19649
	I0916 17:24:44.645604  378724 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 17:24:44.647092  378724 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 17:24:44.648550  378724 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 17:24:44.650072  378724 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 17:24:44.652843  378724 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 17:24:44.653087  378724 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 17:24:44.685611  378724 out.go:97] Using the kvm2 driver based on user configuration
	I0916 17:24:44.685646  378724 start.go:297] selected driver: kvm2
	I0916 17:24:44.685662  378724 start.go:901] validating driver "kvm2" against <nil>
	I0916 17:24:44.686085  378724 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:24:44.686188  378724 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19649-371203/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0916 17:24:44.702371  378724 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0916 17:24:44.702427  378724 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 17:24:44.703002  378724 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0916 17:24:44.703149  378724 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 17:24:44.703185  378724 cni.go:84] Creating CNI manager for ""
	I0916 17:24:44.703234  378724 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0916 17:24:44.703244  378724 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0916 17:24:44.703322  378724 start.go:340] cluster config:
	{Name:download-only-803428 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-803428 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 17:24:44.703447  378724 iso.go:125] acquiring lock: {Name:mk3f485882099a6d11d3099c0ad8ff2762f11ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 17:24:44.705742  378724 out.go:97] Starting "download-only-803428" primary control-plane node in "download-only-803428" cluster
	I0916 17:24:44.705781  378724 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 17:24:45.253396  378724 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0916 17:24:45.253436  378724 cache.go:56] Caching tarball of preloaded images
	I0916 17:24:45.253620  378724 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0916 17:24:45.255580  378724 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 17:24:45.255614  378724 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0916 17:24:45.364041  378724 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19649-371203/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-803428 host does not exist
	  To start a cluster, run: "minikube start -p download-only-803428"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
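
Note: the non-zero exit here is the expected outcome, not a failure. Running `minikube logs` against a download-only profile exits 85 because the host was never created (see "host does not exist" above). A minimal sketch of the same check, reusing the binary path and profile name recorded in this run:
  # Expected to fail: the download-only profile has no VM behind it.
  out/minikube-linux-amd64 logs -p download-only-803428
  echo "exit: $?"   # this run observed exit status 85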

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-803428
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-310469 --alsologtostderr --binary-mirror http://127.0.0.1:32813 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-310469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-310469
--- PASS: TestBinaryMirror (0.61s)

                                                
                                    
TestOffline (82.78s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-657607 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-657607 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.504337913s)
helpers_test.go:175: Cleaning up "offline-crio-657607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-657607
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-657607: (1.278499308s)
--- PASS: TestOffline (82.78s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-529439
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-529439: exit status 85 (59.207555ms)

                                                
                                                
-- stdout --
	* Profile "addons-529439" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-529439"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-529439
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-529439: exit status 85 (58.030872ms)

                                                
                                                
-- stdout --
	* Profile "addons-529439" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-529439"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
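
Both PreSetup checks assert that addon commands against a profile that does not exist yet fail with exit status 85 instead of silently succeeding. A condensed sketch of the same two checks, using the profile name from this run:
  # Neither command should find the "addons-529439" profile at this point.
  out/minikube-linux-amd64 addons enable dashboard -p addons-529439;  echo "enable exit: $?"
  out/minikube-linux-amd64 addons disable dashboard -p addons-529439; echo "disable exit: $?"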

                                                
                                    
TestCertOptions (75.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-196343 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-196343 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.509074779s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-196343 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-196343 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-196343 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-196343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-196343
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-196343: (1.017557711s)
--- PASS: TestCertOptions (75.99s)
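
The substance of this test is that the extra --apiserver-ips, --apiserver-names, and --apiserver-port=8555 values end up in the serving certificate and kubeconfig. A sketch of the same verification; the grep filters are illustrative additions, the underlying commands are the ones run above:
  # Look for the extra SANs in the apiserver certificate inside the VM.
  out/minikube-linux-amd64 -p cert-options-196343 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
  # The non-default apiserver port should appear in the kubeconfig server URL.
  kubectl --context cert-options-196343 config view | grep 8555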

                                                
                                    
TestCertExpiration (286.82s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-729778 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-729778 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m5.521381774s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-729778 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-729778 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.295456318s)
helpers_test.go:175: Cleaning up "cert-expiration-729778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-729778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-729778: (1.005830975s)
--- PASS: TestCertExpiration (286.82s)
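
The two starts differ only in --cert-expiration: the first issues short-lived (3m) certificates, and the second start with --cert-expiration=8760h is expected to regenerate them once the window has passed. A sketch for inspecting the resulting expiry date, assuming openssl is available in the guest (other tests in this suite invoke it there):
  # Print the notAfter date of the apiserver certificate after the second start.
  out/minikube-linux-amd64 -p cert-expiration-729778 ssh \
    "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"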

                                                
                                    
TestForceSystemdFlag (90.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-669400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-669400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m29.190962606s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-669400 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-669400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-669400
--- PASS: TestForceSystemdFlag (90.39s)
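
With --force-systemd, minikube is expected to configure CRI-O for the systemd cgroup manager, which is why the test reads the 02-crio.conf drop-in. The same check with an added filter (the grep is an illustrative assumption, not part of the test):
  # Show only the cgroup-related lines of the drop-in the test inspects.
  out/minikube-linux-amd64 -p force-systemd-flag-669400 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup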

                                                
                                    
TestForceSystemdEnv (55.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-886101 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-886101 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.701971024s)
helpers_test.go:175: Cleaning up "force-systemd-env-886101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-886101
--- PASS: TestForceSystemdEnv (55.70s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.56s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.56s)

                                                
                                    
TestErrorSpam/setup (43.56s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-408239 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-408239 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-408239 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-408239 --driver=kvm2  --container-runtime=crio: (43.56117021s)
--- PASS: TestErrorSpam/setup (43.56s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (5.2s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 stop: (2.313889746s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 stop: (1.52042028s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-408239 --log_dir /tmp/nospam-408239 stop: (1.366742444s)
--- PASS: TestErrorSpam/stop (5.20s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19649-371203/.minikube/files/etc/test/nested/copy/378463/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (88.16s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-472457 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-472457 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m28.160854496s)
--- PASS: TestFunctional/serial/StartWithProxy (88.16s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.18s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-472457 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-472457 --alsologtostderr -v=8: (35.17634218s)
functional_test.go:663: soft start took 35.177240557s for "functional-472457" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.18s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-472457 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 cache add registry.k8s.io/pause:3.1: (1.238713929s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 cache add registry.k8s.io/pause:3.3: (1.348791957s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 cache add registry.k8s.io/pause:latest: (1.205386897s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-472457 /tmp/TestFunctionalserialCacheCmdcacheadd_local3301105662/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cache add minikube-local-cache-test:functional-472457
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 cache add minikube-local-cache-test:functional-472457: (1.928434831s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cache delete minikube-local-cache-test:functional-472457
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-472457
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.367338ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 cache reload: (1.032851444s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
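
This subtest exercises the full cache round trip: delete the image from the node's runtime, confirm crictl no longer finds it (the expected exit status 1 above), then let `cache reload` push it back from the host-side cache. The same sequence, condensed from the commands above:
  # Remove the image inside the node, verify it is gone, restore it, verify again.
  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl inspecti registry.k8s.io/pause:latest || echo "image absent, as expected"
  out/minikube-linux-amd64 -p functional-472457 cache reload
  out/minikube-linux-amd64 -p functional-472457 ssh sudo crictl inspecti registry.k8s.io/pause:latest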

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 kubectl -- --context functional-472457 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-472457 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-472457 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-472457 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.017385817s)
functional_test.go:761: restart took 43.017499166s for "functional-472457" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.02s)
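
--extra-config=component.key=value passes extra flags to an individual Kubernetes component on restart; here it enables the NamespaceAutoProvision admission plugin on the apiserver. A sketch for confirming the flag reached the running apiserver, reusing the control-plane label this suite queries elsewhere (the grep is illustrative):
  # The admission-plugins flag should be visible in the kube-apiserver pod spec.
  kubectl --context functional-472457 -n kube-system get po -l tier=control-plane -o yaml | grep enable-admission-plugins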

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-472457 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 logs: (1.485176331s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 logs --file /tmp/TestFunctionalserialLogsFileCmd239722972/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 logs --file /tmp/TestFunctionalserialLogsFileCmd239722972/001/logs.txt: (1.52349564s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (4.82s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-472457 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-472457
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-472457: exit status 115 (284.971348ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.184:31355 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-472457 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-472457 delete -f testdata/invalidsvc.yaml: (1.334031243s)
--- PASS: TestFunctional/serial/InvalidService (4.82s)
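
`minikube service` is expected to fail fast when the Service selects no running pods; the exit status 115 and the SVC_UNREACHABLE reason above are the assertion. Condensed sketch using the repository's testdata manifest:
  # Apply a Service with no backing pods, confirm the service command refuses it, then clean up.
  kubectl --context functional-472457 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-amd64 service invalid-svc -p functional-472457; echo "exit: $?"   # 115 in this run
  kubectl --context functional-472457 delete -f testdata/invalidsvc.yaml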

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 config get cpus: exit status 14 (60.679886ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 config get cpus: exit status 14 (48.633368ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
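
The config subcommands distinguish an unset key (exit status 14, "specified key could not be found") from a successful get; the test toggles `cpus` through unset, set, get, and unset again. The sequence in short:
  # get on an unset key fails with exit 14; after set it succeeds; unset restores the error.
  out/minikube-linux-amd64 -p functional-472457 config unset cpus
  out/minikube-linux-amd64 -p functional-472457 config get cpus; echo "exit: $?"   # 14 expected
  out/minikube-linux-amd64 -p functional-472457 config set cpus 2
  out/minikube-linux-amd64 -p functional-472457 config get cpus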

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-472457 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-472457 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 390751: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.64s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-472457 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-472457 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.275505ms)

                                                
                                                
-- stdout --
	* [functional-472457] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:08:58.421188  390497 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:08:58.421349  390497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:08:58.421361  390497 out.go:358] Setting ErrFile to fd 2...
	I0916 18:08:58.421452  390497 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:08:58.421994  390497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:08:58.422874  390497 out.go:352] Setting JSON to false
	I0916 18:08:58.424057  390497 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6681,"bootTime":1726503457,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:08:58.424162  390497 start.go:139] virtualization: kvm guest
	I0916 18:08:58.426086  390497 out.go:177] * [functional-472457] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 18:08:58.428071  390497 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:08:58.428113  390497 notify.go:220] Checking for updates...
	I0916 18:08:58.431014  390497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:08:58.432530  390497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:08:58.433863  390497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:08:58.435409  390497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:08:58.436940  390497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:08:58.439091  390497 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:08:58.439480  390497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:08:58.439533  390497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:08:58.456483  390497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0916 18:08:58.457150  390497 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:08:58.457906  390497 main.go:141] libmachine: Using API Version  1
	I0916 18:08:58.457932  390497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:08:58.458282  390497 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:08:58.458483  390497 main.go:141] libmachine: (functional-472457) Calling .DriverName
	I0916 18:08:58.458751  390497 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:08:58.459182  390497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:08:58.459232  390497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:08:58.475255  390497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0916 18:08:58.475656  390497 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:08:58.476303  390497 main.go:141] libmachine: Using API Version  1
	I0916 18:08:58.476326  390497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:08:58.476780  390497 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:08:58.477041  390497 main.go:141] libmachine: (functional-472457) Calling .DriverName
	I0916 18:08:58.513083  390497 out.go:177] * Using the kvm2 driver based on existing profile
	I0916 18:08:58.514557  390497 start.go:297] selected driver: kvm2
	I0916 18:08:58.514572  390497 start.go:901] validating driver "kvm2" against &{Name:functional-472457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-472457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:08:58.514725  390497 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:08:58.517348  390497 out.go:201] 
	W0916 18:08:58.518857  390497 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 18:08:58.520197  390497 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-472457 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
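
--dry-run runs only the validation path against the existing profile: asking for 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the second dry-run without the undersized memory request passes. A sketch of the pair of invocations:
  # First call is expected to fail validation with exit 23; the second should succeed.
  out/minikube-linux-amd64 start -p functional-472457 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio; echo "exit: $?"
  out/minikube-linux-amd64 start -p functional-472457 --dry-run --driver=kvm2 --container-runtime=crio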

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-472457 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-472457 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.613386ms)

                                                
                                                
-- stdout --
	* [functional-472457] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:08:58.264284  390446 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:08:58.264417  390446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:08:58.264427  390446 out.go:358] Setting ErrFile to fd 2...
	I0916 18:08:58.264431  390446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:08:58.264735  390446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:08:58.265297  390446 out.go:352] Setting JSON to false
	I0916 18:08:58.266402  390446 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":6681,"bootTime":1726503457,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 18:08:58.266473  390446 start.go:139] virtualization: kvm guest
	I0916 18:08:58.269192  390446 out.go:177] * [functional-472457] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 18:08:58.270769  390446 out.go:177]   - MINIKUBE_LOCATION=19649
	I0916 18:08:58.270772  390446 notify.go:220] Checking for updates...
	I0916 18:08:58.273956  390446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 18:08:58.275431  390446 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	I0916 18:08:58.277100  390446 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	I0916 18:08:58.278636  390446 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 18:08:58.280043  390446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 18:08:58.281783  390446 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:08:58.282224  390446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:08:58.282279  390446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:08:58.301839  390446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
	I0916 18:08:58.302274  390446 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:08:58.302855  390446 main.go:141] libmachine: Using API Version  1
	I0916 18:08:58.302883  390446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:08:58.303273  390446 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:08:58.303462  390446 main.go:141] libmachine: (functional-472457) Calling .DriverName
	I0916 18:08:58.303796  390446 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 18:08:58.304252  390446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:08:58.304304  390446 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:08:58.320154  390446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0916 18:08:58.320744  390446 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:08:58.321397  390446 main.go:141] libmachine: Using API Version  1
	I0916 18:08:58.321422  390446 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:08:58.321755  390446 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:08:58.322019  390446 main.go:141] libmachine: (functional-472457) Calling .DriverName
	I0916 18:08:58.357607  390446 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0916 18:08:58.358930  390446 start.go:297] selected driver: kvm2
	I0916 18:08:58.358948  390446 start.go:901] validating driver "kvm2" against &{Name:functional-472457 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19649/minikube-v1.34.0-1726481713-19649-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726481311-19649@sha256:b5dfdcf7ad8b49233db09f1c58aaf52f6522fde64cf16c939b3fc45365d24cdc Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-472457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.184 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 18:08:58.359090  390446 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 18:08:58.361974  390446 out.go:201] 
	W0916 18:08:58.363532  390446 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 18:08:58.365204  390446 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)
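Note: the format string passed above spells the key "kublet"; that is only a label inside the template, while the underlying Go template field is .Kubelet. A minimal by-hand version of the same three checks against this profile:
out/minikube-linux-amd64 -p functional-472457 status
out/minikube-linux-amd64 -p functional-472457 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-amd64 -p functional-472457 status -o json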

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-472457 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-472457 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qbrh7" [c1876f7a-d7bf-4b5a-8d04-44ce852322e4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-qbrh7" [c1876f7a-d7bf-4b5a-8d04-44ce852322e4] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004600394s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.184:32361
functional_test.go:1675: http://192.168.39.184:32361: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-qbrh7

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.184:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.184:32361
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.09s)
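The flow above can be reproduced by hand; a minimal sketch (deployment name and image reused from the log, the NodePort in the URL differs per run, and the wait timeout is an arbitrary choice):
kubectl --context functional-472457 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-472457 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-472457 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
# ask minikube for the reachable URL and probe it, as the test does
curl -s "$(out/minikube-linux-amd64 -p functional-472457 service hello-node-connect --url)"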

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (43.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [832165f4-1105-47c8-a331-426280e80619] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004873746s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-472457 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-472457 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-472457 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-472457 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-472457 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4f3036e3-5e40-4fa2-b772-0c57fd50761f] Pending
helpers_test.go:344: "sp-pod" [4f3036e3-5e40-4fa2-b772-0c57fd50761f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4f3036e3-5e40-4fa2-b772-0c57fd50761f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004638383s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-472457 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-472457 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-472457 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1e498599-9cd4-403b-8eba-1e28aa01a533] Pending
helpers_test.go:344: "sp-pod" [1e498599-9cd4-403b-8eba-1e28aa01a533] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1e498599-9cd4-403b-8eba-1e28aa01a533] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004443046s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-472457 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (43.91s)
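The testdata/storage-provisioner manifests themselves are not reproduced in this log; a hedged, minimal claim that exercises the same default-storageclass path (only the claim name myclaim comes from the log, the access mode and size are assumptions):
kubectl --context functional-472457 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# with the default-storageclass and storage-provisioner addons enabled the claim should bind
kubectl --context functional-472457 get pvc myclaim -o jsonpath='{.status.phase}'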

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh -n functional-472457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cp functional-472457:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd248907431/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh -n functional-472457 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh -n functional-472457 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.47s)
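A by-hand version of the copy round-trip verified above (guest path reused from the log; the local destination file is arbitrary):
out/minikube-linux-amd64 -p functional-472457 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-472457 ssh -n functional-472457 "sudo cat /home/docker/cp-test.txt"
# copy it back out and confirm nothing changed in transit
out/minikube-linux-amd64 -p functional-472457 cp functional-472457:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt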

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-472457 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-cp959" [7a0b414f-7cad-4429-a8b7-5f4f0ee87e7b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-cp959" [7a0b414f-7cad-4429-a8b7-5f4f0ee87e7b] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004489019s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-472457 exec mysql-6cdb49bbb-cp959 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-472457 exec mysql-6cdb49bbb-cp959 -- mysql -ppassword -e "show databases;": exit status 1 (140.960296ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-472457 exec mysql-6cdb49bbb-cp959 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.86s)
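The first exec fails because mysqld is still initializing after the container reports Running, so the test simply retries until "show databases;" succeeds. A hedged manual equivalent (the pod name carries the ReplicaSet hash and differs per run; the retry count and sleep are arbitrary):
POD=$(kubectl --context functional-472457 get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}')
for i in $(seq 1 12); do
  kubectl --context functional-472457 exec "$POD" -- mysql -ppassword -e "show databases;" && break
  sleep 5
done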

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/378463/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /etc/test/nested/copy/378463/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/378463.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /etc/ssl/certs/378463.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/378463.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /usr/share/ca-certificates/378463.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3784632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /etc/ssl/certs/3784632.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3784632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /usr/share/ca-certificates/3784632.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
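The .0 entries (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash links kept next to the synced .pem files; assuming openssl is available inside the guest image, the link name can be cross-checked against the synced cert (filenames reused from the log):
# the printed hash should match the 51391683.0 basename checked above
out/minikube-linux-amd64 -p functional-472457 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/378463.pem"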

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-472457 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
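The go-template above prints only the label keys; a quick way to see the full key=value set for the same node:
kubectl --context functional-472457 get nodes --show-labels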

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active docker": exit status 1 (272.691628ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active containerd": exit status 1 (299.190069ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
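With crio selected as the container runtime, docker and containerd are expected to be inactive; systemctl is-active exits non-zero for a unit that is not active (status 3 above), which propagates through ssh and makes the minikube command return exit status 1 even though "inactive" is the desired answer. A manual spot-check, with the active runtime added for contrast:
out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active docker"      # expected: inactive
out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active containerd"  # expected: inactive
out/minikube-linux-amd64 -p functional-472457 ssh "sudo systemctl is-active crio"        # expected: active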

                                                
                                    
x
+
TestFunctional/parallel/License (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.114874377s)
--- PASS: TestFunctional/parallel/License (1.12s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-472457 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-472457 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-7h5nx" [1e128443-7dd3-456a-b2fe-93e2e0de0a14] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-7h5nx" [1e128443-7dd3-456a-b2fe-93e2e0de0a14] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.00551582s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.24s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (11.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdany-port2236053303/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726510137107682626" to /tmp/TestFunctionalparallelMountCmdany-port2236053303/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726510137107682626" to /tmp/TestFunctionalparallelMountCmdany-port2236053303/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726510137107682626" to /tmp/TestFunctionalparallelMountCmdany-port2236053303/001/test-1726510137107682626
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (275.235722ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 18:08 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 18:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 18:08 test-1726510137107682626
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh cat /mount-9p/test-1726510137107682626
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-472457 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [166a68d7-c33c-4515-af2f-e434e6758ac8] Pending
helpers_test.go:344: "busybox-mount" [166a68d7-c33c-4515-af2f-e434e6758ac8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [166a68d7-c33c-4515-af2f-e434e6758ac8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [166a68d7-c33c-4515-af2f-e434e6758ac8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.004440494s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-472457 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdany-port2236053303/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.09s)
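The test starts minikube mount as a background daemon and polls findmnt until the 9p mount appears, so the first failed findmnt is expected while the mount is still coming up. A hedged manual equivalent (the host directory is arbitrary):
mkdir -p /tmp/host-dir
out/minikube-linux-amd64 mount -p functional-472457 /tmp/host-dir:/mount-9p &
MOUNT_PID=$!
# poll until the 9p filesystem is visible inside the VM
until out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
out/minikube-linux-amd64 -p functional-472457 ssh "ls -la /mount-9p"
kill "$MOUNT_PID"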

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "303.295653ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "58.285664ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "252.225467ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "51.504844ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)
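The --light run is faster because it skips probing each profile's cluster status; to compare the two by hand:
time out/minikube-linux-amd64 profile list -o json
time out/minikube-linux-amd64 profile list -o json --light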

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdspecific-port31255731/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.300368ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdspecific-port31255731/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh "sudo umount -f /mount-9p": exit status 1 (196.25475ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-472457 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdspecific-port31255731/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)
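Same flow as the any-port case, but pinning the 9p server to port 46464; the forced umount at the end reports "not mounted" because the mount daemon has already been stopped, and the test logs that as non-fatal. A hedged manual version:
mkdir -p /tmp/host-dir
out/minikube-linux-amd64 mount -p functional-472457 /tmp/host-dir:/mount-9p --port 46464 &
MOUNT_PID=$!
until out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
kill "$MOUNT_PID"
# once the daemon is gone the forced umount reports "not mounted", mirroring the log above
out/minikube-linux-amd64 -p functional-472457 ssh "sudo umount -f /mount-9p" || true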

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 service list -o json
functional_test.go:1494: Took "430.600952ms" to run "out/minikube-linux-amd64 -p functional-472457 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.184:31242
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup156620628/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup156620628/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup156620628/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T" /mount1: exit status 1 (253.127601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-472457 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup156620628/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup156620628/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-472457 /tmp/TestFunctionalparallelMountCmdVerifyCleanup156620628/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.184:31242
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-472457 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-472457
localhost/kicbase/echo-server:functional-472457
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-472457 image ls --format short --alsologtostderr:
I0916 18:09:25.196957  392370 out.go:345] Setting OutFile to fd 1 ...
I0916 18:09:25.197284  392370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.197296  392370 out.go:358] Setting ErrFile to fd 2...
I0916 18:09:25.197301  392370 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.197537  392370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
I0916 18:09:25.198208  392370 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.198337  392370 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.198815  392370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.198868  392370 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.216137  392370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
I0916 18:09:25.216748  392370 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.217354  392370 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.217376  392370 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.217791  392370 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.218010  392370 main.go:141] libmachine: (functional-472457) Calling .GetState
I0916 18:09:25.219973  392370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.220019  392370 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.235176  392370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45607
I0916 18:09:25.235686  392370 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.236218  392370 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.236238  392370 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.236587  392370 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.236827  392370 main.go:141] libmachine: (functional-472457) Calling .DriverName
I0916 18:09:25.237077  392370 ssh_runner.go:195] Run: systemctl --version
I0916 18:09:25.237102  392370 main.go:141] libmachine: (functional-472457) Calling .GetSSHHostname
I0916 18:09:25.240491  392370 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.241050  392370 main.go:141] libmachine: (functional-472457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:e6:55", ip: ""} in network mk-functional-472457: {Iface:virbr1 ExpiryTime:2024-09-16 19:06:08 +0000 UTC Type:0 Mac:52:54:00:ad:e6:55 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-472457 Clientid:01:52:54:00:ad:e6:55}
I0916 18:09:25.241076  392370 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined IP address 192.168.39.184 and MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.241224  392370 main.go:141] libmachine: (functional-472457) Calling .GetSSHPort
I0916 18:09:25.241379  392370 main.go:141] libmachine: (functional-472457) Calling .GetSSHKeyPath
I0916 18:09:25.241647  392370 main.go:141] libmachine: (functional-472457) Calling .GetSSHUsername
I0916 18:09:25.241760  392370 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/functional-472457/id_rsa Username:docker}
I0916 18:09:25.365601  392370 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 18:09:25.553365  392370 main.go:141] libmachine: Making call to close driver server
I0916 18:09:25.553392  392370 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:25.553697  392370 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:25.553718  392370 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:25.553727  392370 main.go:141] libmachine: Making call to close driver server
I0916 18:09:25.553736  392370 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:25.553790  392370 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
I0916 18:09:25.553965  392370 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:25.553976  392370 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.43s)
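Under crio, minikube image ls is served by running crictl on the node (the ssh_runner line above shows the exact call); the two views can be compared directly:
out/minikube-linux-amd64 -p functional-472457 image ls --format short
# the same inventory as CRI-O itself reports it
out/minikube-linux-amd64 -p functional-472457 ssh "sudo crictl images --output json"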

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-472457 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-472457  | f5f486655a24c | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/kicbase/echo-server           | functional-472457  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-472457 image ls --format table --alsologtostderr:
I0916 18:09:25.607136  392441 out.go:345] Setting OutFile to fd 1 ...
I0916 18:09:25.607287  392441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.607298  392441 out.go:358] Setting ErrFile to fd 2...
I0916 18:09:25.607303  392441 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.607521  392441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
I0916 18:09:25.608166  392441 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.608300  392441 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.608705  392441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.608765  392441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.625617  392441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35837
I0916 18:09:25.626173  392441 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.626853  392441 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.626877  392441 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.627385  392441 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.627616  392441 main.go:141] libmachine: (functional-472457) Calling .GetState
I0916 18:09:25.629611  392441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.629661  392441 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.646353  392441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
I0916 18:09:25.646906  392441 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.647593  392441 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.647621  392441 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.647987  392441 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.648213  392441 main.go:141] libmachine: (functional-472457) Calling .DriverName
I0916 18:09:25.648547  392441 ssh_runner.go:195] Run: systemctl --version
I0916 18:09:25.648588  392441 main.go:141] libmachine: (functional-472457) Calling .GetSSHHostname
I0916 18:09:25.651387  392441 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.651804  392441 main.go:141] libmachine: (functional-472457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:e6:55", ip: ""} in network mk-functional-472457: {Iface:virbr1 ExpiryTime:2024-09-16 19:06:08 +0000 UTC Type:0 Mac:52:54:00:ad:e6:55 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-472457 Clientid:01:52:54:00:ad:e6:55}
I0916 18:09:25.651832  392441 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined IP address 192.168.39.184 and MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.651895  392441 main.go:141] libmachine: (functional-472457) Calling .GetSSHPort
I0916 18:09:25.652023  392441 main.go:141] libmachine: (functional-472457) Calling .GetSSHKeyPath
I0916 18:09:25.652229  392441 main.go:141] libmachine: (functional-472457) Calling .GetSSHUsername
I0916 18:09:25.652358  392441 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/functional-472457/id_rsa Username:docker}
I0916 18:09:25.794509  392441 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 18:09:26.129275  392441 main.go:141] libmachine: Making call to close driver server
I0916 18:09:26.129298  392441 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:26.129613  392441 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:26.129641  392441 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:26.129658  392441 main.go:141] libmachine: Making call to close driver server
I0916 18:09:26.129656  392441 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
I0916 18:09:26.129665  392441 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:26.129947  392441 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:26.129963  392441 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-472457 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6
ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"f5f486655a24c1f3bffae4344ab1ce22d55dbee26d640bbab987d8ea76951111","repoDigests":["localhost/minikube-local-cache-test@sha256:f17a2a309a8db21a7f431425beb0
3bf5bcef03172582827137443841c4d5dde7"],"repoTags":["localhost/minikube-local-cache-test:functional-472457"],"size":"3330"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-472457"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{
"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id
":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["r
egistry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pa
use@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-472457 image ls --format json --alsologtostderr:
I0916 18:09:25.496633  392416 out.go:345] Setting OutFile to fd 1 ...
I0916 18:09:25.496938  392416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.496950  392416 out.go:358] Setting ErrFile to fd 2...
I0916 18:09:25.496957  392416 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.497232  392416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
I0916 18:09:25.497884  392416 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.497982  392416 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.498358  392416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.498395  392416 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.514115  392416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44191
I0916 18:09:25.514655  392416 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.515308  392416 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.515335  392416 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.515789  392416 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.516001  392416 main.go:141] libmachine: (functional-472457) Calling .GetState
I0916 18:09:25.518230  392416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.518292  392416 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.534687  392416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
I0916 18:09:25.535135  392416 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.535655  392416 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.535688  392416 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.536050  392416 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.536282  392416 main.go:141] libmachine: (functional-472457) Calling .DriverName
I0916 18:09:25.536529  392416 ssh_runner.go:195] Run: systemctl --version
I0916 18:09:25.536574  392416 main.go:141] libmachine: (functional-472457) Calling .GetSSHHostname
I0916 18:09:25.539907  392416 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.540409  392416 main.go:141] libmachine: (functional-472457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:e6:55", ip: ""} in network mk-functional-472457: {Iface:virbr1 ExpiryTime:2024-09-16 19:06:08 +0000 UTC Type:0 Mac:52:54:00:ad:e6:55 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-472457 Clientid:01:52:54:00:ad:e6:55}
I0916 18:09:25.540438  392416 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined IP address 192.168.39.184 and MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.540605  392416 main.go:141] libmachine: (functional-472457) Calling .GetSSHPort
I0916 18:09:25.540785  392416 main.go:141] libmachine: (functional-472457) Calling .GetSSHKeyPath
I0916 18:09:25.541006  392416 main.go:141] libmachine: (functional-472457) Calling .GetSSHUsername
I0916 18:09:25.541185  392416 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/functional-472457/id_rsa Username:docker}
I0916 18:09:25.696589  392416 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 18:09:26.038639  392416 main.go:141] libmachine: Making call to close driver server
I0916 18:09:26.038659  392416 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:26.038966  392416 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:26.039024  392416 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:26.039117  392416 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
I0916 18:09:26.039196  392416 main.go:141] libmachine: Making call to close driver server
I0916 18:09:26.039214  392416 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:26.039468  392416 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:26.039498  392416 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:26.039506  392416 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.60s)
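For reference, the output of "image ls --format json" above is a flat array of image objects (id, repoDigests, repoTags, size), so it can be filtered directly. A minimal sketch, assuming jq is available on the host running the binary:

  out/minikube-linux-amd64 -p functional-472457 image ls --format json | jq -r '.[] | select(.repoTags | length > 0) | .repoTags[]'

This prints one tag per line and skips entries with an empty repoTags list (such as the dashboard and metrics-scraper images above).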

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-472457 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-472457
size: "4943877"
- id: f5f486655a24c1f3bffae4344ab1ce22d55dbee26d640bbab987d8ea76951111
repoDigests:
- localhost/minikube-local-cache-test@sha256:f17a2a309a8db21a7f431425beb03bf5bcef03172582827137443841c4d5dde7
repoTags:
- localhost/minikube-local-cache-test:functional-472457
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-472457 image ls --format yaml --alsologtostderr:
I0916 18:09:25.197982  392371 out.go:345] Setting OutFile to fd 1 ...
I0916 18:09:25.198234  392371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.198239  392371 out.go:358] Setting ErrFile to fd 2...
I0916 18:09:25.198245  392371 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:25.198421  392371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
I0916 18:09:25.198969  392371 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.199075  392371 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:25.199460  392371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.199499  392371 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.216124  392371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39177
I0916 18:09:25.216748  392371 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.217453  392371 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.217480  392371 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.217821  392371 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.218010  392371 main.go:141] libmachine: (functional-472457) Calling .GetState
I0916 18:09:25.220084  392371 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:25.220127  392371 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:25.235207  392371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34079
I0916 18:09:25.235682  392371 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:25.236242  392371 main.go:141] libmachine: Using API Version  1
I0916 18:09:25.236269  392371 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:25.236579  392371 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:25.236762  392371 main.go:141] libmachine: (functional-472457) Calling .DriverName
I0916 18:09:25.236950  392371 ssh_runner.go:195] Run: systemctl --version
I0916 18:09:25.236976  392371 main.go:141] libmachine: (functional-472457) Calling .GetSSHHostname
I0916 18:09:25.240080  392371 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.240597  392371 main.go:141] libmachine: (functional-472457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:e6:55", ip: ""} in network mk-functional-472457: {Iface:virbr1 ExpiryTime:2024-09-16 19:06:08 +0000 UTC Type:0 Mac:52:54:00:ad:e6:55 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-472457 Clientid:01:52:54:00:ad:e6:55}
I0916 18:09:25.240615  392371 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined IP address 192.168.39.184 and MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:25.240665  392371 main.go:141] libmachine: (functional-472457) Calling .GetSSHPort
I0916 18:09:25.240863  392371 main.go:141] libmachine: (functional-472457) Calling .GetSSHKeyPath
I0916 18:09:25.241131  392371 main.go:141] libmachine: (functional-472457) Calling .GetSSHUsername
I0916 18:09:25.241321  392371 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/functional-472457/id_rsa Username:docker}
I0916 18:09:25.339726  392371 ssh_runner.go:195] Run: sudo crictl images --output json
I0916 18:09:25.436945  392371 main.go:141] libmachine: Making call to close driver server
I0916 18:09:25.436986  392371 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:25.437322  392371 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:25.437341  392371 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:25.437346  392371 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
I0916 18:09:25.437349  392371 main.go:141] libmachine: Making call to close driver server
I0916 18:09:25.437422  392371 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:25.437725  392371 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
I0916 18:09:25.437745  392371 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:25.437757  392371 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (11.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-472457 ssh pgrep buildkitd: exit status 1 (218.296522ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image build -t localhost/my-image:functional-472457 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 image build -t localhost/my-image:functional-472457 testdata/build --alsologtostderr: (10.775028263s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-472457 image build -t localhost/my-image:functional-472457 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0120ac2015a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-472457
--> 3fd99f5ec7c
Successfully tagged localhost/my-image:functional-472457
3fd99f5ec7c52f23445b17f25866273c47c4987ffe27cf71b0b25c6f5db329a7
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-472457 image build -t localhost/my-image:functional-472457 testdata/build --alsologtostderr:
I0916 18:09:26.313623  392495 out.go:345] Setting OutFile to fd 1 ...
I0916 18:09:26.313753  392495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:26.313762  392495 out.go:358] Setting ErrFile to fd 2...
I0916 18:09:26.313766  392495 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 18:09:26.314033  392495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
I0916 18:09:26.314762  392495 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:26.315408  392495 config.go:182] Loaded profile config "functional-472457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0916 18:09:26.315783  392495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:26.315848  392495 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:26.333061  392495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40791
I0916 18:09:26.333650  392495 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:26.334451  392495 main.go:141] libmachine: Using API Version  1
I0916 18:09:26.334478  392495 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:26.334904  392495 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:26.335136  392495 main.go:141] libmachine: (functional-472457) Calling .GetState
I0916 18:09:26.337099  392495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0916 18:09:26.337159  392495 main.go:141] libmachine: Launching plugin server for driver kvm2
I0916 18:09:26.352842  392495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39849
I0916 18:09:26.353383  392495 main.go:141] libmachine: () Calling .GetVersion
I0916 18:09:26.354053  392495 main.go:141] libmachine: Using API Version  1
I0916 18:09:26.354088  392495 main.go:141] libmachine: () Calling .SetConfigRaw
I0916 18:09:26.354472  392495 main.go:141] libmachine: () Calling .GetMachineName
I0916 18:09:26.354678  392495 main.go:141] libmachine: (functional-472457) Calling .DriverName
I0916 18:09:26.354889  392495 ssh_runner.go:195] Run: systemctl --version
I0916 18:09:26.354915  392495 main.go:141] libmachine: (functional-472457) Calling .GetSSHHostname
I0916 18:09:26.358196  392495 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:26.358663  392495 main.go:141] libmachine: (functional-472457) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:e6:55", ip: ""} in network mk-functional-472457: {Iface:virbr1 ExpiryTime:2024-09-16 19:06:08 +0000 UTC Type:0 Mac:52:54:00:ad:e6:55 Iaid: IPaddr:192.168.39.184 Prefix:24 Hostname:functional-472457 Clientid:01:52:54:00:ad:e6:55}
I0916 18:09:26.358700  392495 main.go:141] libmachine: (functional-472457) DBG | domain functional-472457 has defined IP address 192.168.39.184 and MAC address 52:54:00:ad:e6:55 in network mk-functional-472457
I0916 18:09:26.358831  392495 main.go:141] libmachine: (functional-472457) Calling .GetSSHPort
I0916 18:09:26.359015  392495 main.go:141] libmachine: (functional-472457) Calling .GetSSHKeyPath
I0916 18:09:26.359166  392495 main.go:141] libmachine: (functional-472457) Calling .GetSSHUsername
I0916 18:09:26.359292  392495 sshutil.go:53] new ssh client: &{IP:192.168.39.184 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/functional-472457/id_rsa Username:docker}
I0916 18:09:26.463026  392495 build_images.go:161] Building image from path: /tmp/build.3309507663.tar
I0916 18:09:26.463103  392495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 18:09:26.494452  392495 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3309507663.tar
I0916 18:09:26.509630  392495 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3309507663.tar: stat -c "%s %y" /var/lib/minikube/build/build.3309507663.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3309507663.tar': No such file or directory
I0916 18:09:26.509679  392495 ssh_runner.go:362] scp /tmp/build.3309507663.tar --> /var/lib/minikube/build/build.3309507663.tar (3072 bytes)
I0916 18:09:26.574254  392495 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3309507663
I0916 18:09:26.602835  392495 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3309507663 -xf /var/lib/minikube/build/build.3309507663.tar
I0916 18:09:26.632694  392495 crio.go:315] Building image: /var/lib/minikube/build/build.3309507663
I0916 18:09:26.632792  392495 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-472457 /var/lib/minikube/build/build.3309507663 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0916 18:09:36.986924  392495 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-472457 /var/lib/minikube/build/build.3309507663 --cgroup-manager=cgroupfs: (10.354097284s)
I0916 18:09:36.987006  392495 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3309507663
I0916 18:09:37.002737  392495 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3309507663.tar
I0916 18:09:37.030494  392495 build_images.go:217] Built localhost/my-image:functional-472457 from /tmp/build.3309507663.tar
I0916 18:09:37.030545  392495 build_images.go:133] succeeded building to: functional-472457
I0916 18:09:37.030552  392495 build_images.go:134] failed building to: 
I0916 18:09:37.030581  392495 main.go:141] libmachine: Making call to close driver server
I0916 18:09:37.030601  392495 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:37.030890  392495 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:37.030908  392495 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:37.030917  392495 main.go:141] libmachine: Making call to close driver server
I0916 18:09:37.030925  392495 main.go:141] libmachine: (functional-472457) Calling .Close
I0916 18:09:37.031206  392495 main.go:141] libmachine: Successfully made call to close driver server
I0916 18:09:37.031222  392495 main.go:141] libmachine: Making call to close connection to plugin binary
I0916 18:09:37.031267  392495 main.go:141] libmachine: (functional-472457) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.25s)
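The three build STEPs above imply a very small build context. A minimal sketch of an equivalent context and build, where the Dockerfile lines are taken from the STEP output but the directory name and the contents of content.txt are placeholders rather than the repository's testdata/build:

  mkdir -p /tmp/imagebuild && cd /tmp/imagebuild
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo placeholder > content.txt
  out/minikube-linux-amd64 -p functional-472457 image build -t localhost/my-image:functional-472457 . --alsologtostderr
  out/minikube-linux-amd64 -p functional-472457 image ls | grep my-image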

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.959931019s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-472457
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.98s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image load --daemon kicbase/echo-server:functional-472457 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 image load --daemon kicbase/echo-server:functional-472457 --alsologtostderr: (2.856820735s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image load --daemon kicbase/echo-server:functional-472457 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/09/16 18:09:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-472457
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image load --daemon kicbase/echo-server:functional-472457 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image save kicbase/echo-server:functional-472457 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-472457 image save kicbase/echo-server:functional-472457 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.491123089s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image rm kicbase/echo-server:functional-472457 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-472457
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-472457 image save --daemon kicbase/echo-server:functional-472457 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-472457
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
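Taken together, the save/remove/load cases above form a round trip that can be reproduced by hand with the same commands the tests ran (the tar path below is the one used by the test; any writable path works):

  out/minikube-linux-amd64 -p functional-472457 image save kicbase/echo-server:functional-472457 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-472457 image rm kicbase/echo-server:functional-472457 --alsologtostderr
  out/minikube-linux-amd64 -p functional-472457 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-472457 image save --daemon kicbase/echo-server:functional-472457 --alsologtostderr
  docker image inspect localhost/kicbase/echo-server:functional-472457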

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-472457
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-472457
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-472457
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (200.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-365438 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-365438 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.830344568s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-365438 -- rollout status deployment/busybox: (4.439231107s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-4hs24 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8lxm5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8whmx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-4hs24 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8lxm5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8whmx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-4hs24 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8lxm5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8whmx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.69s)
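The manifest at testdata/ha/ha-pod-dns-test.yaml is not reproduced in this report; judging by the rollout of deployment/busybox and the three busybox-7dff88458-* pods, it is roughly a three-replica busybox Deployment. A hypothetical equivalent, where the image, command, and labels are placeholders rather than the repository's manifest (kubectl and the ha-365438 context are assumed available, as in the NodeLabels step below):

cat <<'EOF' | kubectl --context ha-365438 apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
EOF
kubectl --context ha-365438 rollout status deployment/busybox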

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-4hs24 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-4hs24 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8lxm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8lxm5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8whmx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-365438 -- exec busybox-7dff88458-8whmx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.26s)
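The pipeline used above pulls the resolved address for host.minikube.internal out of nslookup's output (line 5, third space-separated field with the BusyBox resolver's format), and the follow-up ping confirms the host gateway 192.168.39.1 is reachable from inside each pod. A condensed sketch for a single pod, using a pod name from the log (the NR==5 offset depends on the resolver's output format):

  HOST_IP=$(kubectl --context ha-365438 exec busybox-7dff88458-4hs24 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-365438 exec busybox-7dff88458-4hs24 -- ping -c 1 "$HOST_IP"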

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-365438 -v=7 --alsologtostderr
E0916 18:13:56.984321  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:56.991203  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:57.002585  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:57.023995  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:57.065429  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:57.146921  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:57.308502  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:57.630145  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:58.271986  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:13:59.553515  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:14:02.115892  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:14:07.237874  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-365438 -v=7 --alsologtostderr: (57.752529446s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-365438 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp testdata/cp-test.txt ha-365438:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438:/home/docker/cp-test.txt ha-365438-m02:/home/docker/cp-test_ha-365438_ha-365438-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test_ha-365438_ha-365438-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438:/home/docker/cp-test.txt ha-365438-m03:/home/docker/cp-test_ha-365438_ha-365438-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test_ha-365438_ha-365438-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438:/home/docker/cp-test.txt ha-365438-m04:/home/docker/cp-test_ha-365438_ha-365438-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test_ha-365438_ha-365438-m04.txt"
E0916 18:14:17.479977  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp testdata/cp-test.txt ha-365438-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m02:/home/docker/cp-test.txt ha-365438:/home/docker/cp-test_ha-365438-m02_ha-365438.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test_ha-365438-m02_ha-365438.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m02:/home/docker/cp-test.txt ha-365438-m03:/home/docker/cp-test_ha-365438-m02_ha-365438-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test_ha-365438-m02_ha-365438-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m02:/home/docker/cp-test.txt ha-365438-m04:/home/docker/cp-test_ha-365438-m02_ha-365438-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test_ha-365438-m02_ha-365438-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp testdata/cp-test.txt ha-365438-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt ha-365438:/home/docker/cp-test_ha-365438-m03_ha-365438.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test_ha-365438-m03_ha-365438.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt ha-365438-m02:/home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test_ha-365438-m03_ha-365438-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m03:/home/docker/cp-test.txt ha-365438-m04:/home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test_ha-365438-m03_ha-365438-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp testdata/cp-test.txt ha-365438-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1185444256/001/cp-test_ha-365438-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt ha-365438:/home/docker/cp-test_ha-365438-m04_ha-365438.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438 "sudo cat /home/docker/cp-test_ha-365438-m04_ha-365438.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt ha-365438-m02:/home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m02 "sudo cat /home/docker/cp-test_ha-365438-m04_ha-365438-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 cp ha-365438-m04:/home/docker/cp-test.txt ha-365438-m03:/home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 ssh -n ha-365438-m03 "sudo cat /home/docker/cp-test_ha-365438-m04_ha-365438-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.09s)
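The copy matrix above repeats the same two-step pattern for every node pair: minikube cp to push the file, then minikube ssh to read it back. A minimal sketch that covers the four nodes exercised above in one loop:

  for node in ha-365438 ha-365438-m02 ha-365438-m03 ha-365438-m04; do
    out/minikube-linux-amd64 -p ha-365438 cp testdata/cp-test.txt "$node":/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-365438 ssh -n "$node" "sudo cat /home/docker/cp-test.txt"
  done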

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.481837763s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 node delete m03 -v=7 --alsologtostderr
E0916 18:23:56.985149  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-365438 node delete m03 -v=7 --alsologtostderr: (16.01768645s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.79s)
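
For reference, a minimal sketch (not part of the test output) of how the readiness go-template passed to kubectl above behaves: kubectl evaluates it against the JSON form of the node list, which a plain map decode reproduces here. Only the template string is taken from the log; the one-node sample JSON is invented.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Invented one-node sample standing in for `kubectl get nodes` output.
	const sample = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
		panic(err)
	}

	// Same template string the test passes via -o go-template: one line per
	// node, containing the status of its "Ready" condition.
	const readiness = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	t := template.Must(template.New("ready").Parse(readiness))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" for the sample
		panic(err)
	}
}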

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (357.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-365438 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 18:28:56.985233  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
E0916 18:30:20.049943  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-365438 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m56.612597446s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (357.40s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-365438 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-365438 --control-plane -v=7 --alsologtostderr: (1m17.093004216s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-365438 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.00s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                    
TestJSONOutput/start/Command (80.12s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-514332 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0916 18:33:56.983618  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-514332 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.122723967s)
--- PASS: TestJSONOutput/start/Command (80.12s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-514332 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-514332 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-514332 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-514332 --output=json --user=testUser: (7.367319182s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-188003 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-188003 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.760092ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2cc70070-ab92-4d45-b490-4de7a7be04fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-188003] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"650c8869-48b4-4cb7-b3ab-fade1993c419","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19649"}}
	{"specversion":"1.0","id":"17e47ea0-0922-4fce-aa70-2daba9dd6306","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5dffbc39-79c9-4c47-82b9-22db1d2047a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig"}}
	{"specversion":"1.0","id":"6a269fe6-d5fe-47c5-979f-fd57ccc0082a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube"}}
	{"specversion":"1.0","id":"0e5e057c-1b0a-4225-9e49-aee566df5b3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0a64683a-6710-4ce5-b60b-6aa9b675d83a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bdd4396c-f18c-40c4-8f8f-ca56300dbded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-188003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-188003
--- PASS: TestErrorJSONOutput (0.21s)
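
For reference, a minimal sketch (not part of the test suite) of how the JSON lines shown above can be consumed: each line emitted under --output=json is a self-contained CloudEvents-style envelope, so decoding line by line into a loosely typed struct is enough to recover the event type and its payload. The sample line is copied verbatim from the stdout block above.

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent models only the fields this sketch needs; the real events
// carry more (id, source, datacontenttype, ...), as visible in the log.
type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"bdd4396c-f18c-40c4-8f8f-ca56300dbded","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS exit code 56
	fmt.Println(ev.Type, ev.Data["name"], "exit code", ev.Data["exitcode"])
}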

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (91.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-884157 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-884157 --driver=kvm2  --container-runtime=crio: (43.98702411s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-896628 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-896628 --driver=kvm2  --container-runtime=crio: (45.249507053s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-884157
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-896628
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-896628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-896628
helpers_test.go:175: Cleaning up "first-884157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-884157
--- PASS: TestMinikubeProfile (91.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-335430 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-335430 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.174272478s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.17s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-335430 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-335430 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-353605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-353605 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.874511955s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-335430 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-353605
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-353605: (1.278884381s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.91s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-353605
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-353605: (22.908052613s)
--- PASS: TestMountStart/serial/RestartStopped (23.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353605 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353605 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-588591 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0916 18:38:56.983396  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-588591 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.094079445s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.51s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-588591 -- rollout status deployment/busybox: (3.712055568s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-npxwd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-qsddq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-npxwd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-qsddq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-npxwd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-qsddq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.25s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-npxwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-npxwd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-qsddq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-588591 -- exec busybox-7dff88458-qsddq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
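
For reference, the test above resolves host.minikube.internal inside each busybox pod and selects the host address with awk 'NR==5' | cut -d' ' -f3 before pinging it. The sketch below (not test code) mirrors that field selection in Go; the sample nslookup transcript is invented, and busybox's real output layout may differ slightly.

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors the shell pipeline used by the test
// (awk 'NR==5' | cut -d' ' -f3): take line 5 of the nslookup output and
// return its third space-separated field.
func hostIPFromNslookup(output string) string {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Invented transcript standing in for `nslookup host.minikube.internal`.
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1
}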

                                                
                                    
TestMultiNode/serial/AddNode (53.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-588591 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-588591 -v 3 --alsologtostderr: (53.37714582s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.97s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-588591 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp testdata/cp-test.txt multinode-588591:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3158127858/001/cp-test_multinode-588591.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591:/home/docker/cp-test.txt multinode-588591-m02:/home/docker/cp-test_multinode-588591_multinode-588591-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m02 "sudo cat /home/docker/cp-test_multinode-588591_multinode-588591-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591:/home/docker/cp-test.txt multinode-588591-m03:/home/docker/cp-test_multinode-588591_multinode-588591-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m03 "sudo cat /home/docker/cp-test_multinode-588591_multinode-588591-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp testdata/cp-test.txt multinode-588591-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3158127858/001/cp-test_multinode-588591-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt multinode-588591:/home/docker/cp-test_multinode-588591-m02_multinode-588591.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591 "sudo cat /home/docker/cp-test_multinode-588591-m02_multinode-588591.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591-m02:/home/docker/cp-test.txt multinode-588591-m03:/home/docker/cp-test_multinode-588591-m02_multinode-588591-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m03 "sudo cat /home/docker/cp-test_multinode-588591-m02_multinode-588591-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp testdata/cp-test.txt multinode-588591-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3158127858/001/cp-test_multinode-588591-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt multinode-588591:/home/docker/cp-test_multinode-588591-m03_multinode-588591.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591 "sudo cat /home/docker/cp-test_multinode-588591-m03_multinode-588591.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 cp multinode-588591-m03:/home/docker/cp-test.txt multinode-588591-m02:/home/docker/cp-test_multinode-588591-m03_multinode-588591-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 ssh -n multinode-588591-m02 "sudo cat /home/docker/cp-test_multinode-588591-m03_multinode-588591-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.29s)

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-588591 node stop m03: (1.441454011s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-588591 status: exit status 7 (428.798948ms)

                                                
                                                
-- stdout --
	multinode-588591
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-588591-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-588591-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-588591 status --alsologtostderr: exit status 7 (434.39023ms)

                                                
                                                
-- stdout --
	multinode-588591
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-588591-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-588591-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 18:41:31.142768  410453 out.go:345] Setting OutFile to fd 1 ...
	I0916 18:41:31.142891  410453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:41:31.142900  410453 out.go:358] Setting ErrFile to fd 2...
	I0916 18:41:31.142904  410453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 18:41:31.143119  410453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19649-371203/.minikube/bin
	I0916 18:41:31.143300  410453 out.go:352] Setting JSON to false
	I0916 18:41:31.143335  410453 mustload.go:65] Loading cluster: multinode-588591
	I0916 18:41:31.143462  410453 notify.go:220] Checking for updates...
	I0916 18:41:31.143905  410453 config.go:182] Loaded profile config "multinode-588591": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0916 18:41:31.143928  410453 status.go:255] checking status of multinode-588591 ...
	I0916 18:41:31.144443  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.144498  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.160539  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0916 18:41:31.161079  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.161883  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.161913  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.162254  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.162464  410453 main.go:141] libmachine: (multinode-588591) Calling .GetState
	I0916 18:41:31.164222  410453 status.go:330] multinode-588591 host status = "Running" (err=<nil>)
	I0916 18:41:31.164247  410453 host.go:66] Checking if "multinode-588591" exists ...
	I0916 18:41:31.164709  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.164767  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.180829  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I0916 18:41:31.181571  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.182190  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.182217  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.182601  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.182832  410453 main.go:141] libmachine: (multinode-588591) Calling .GetIP
	I0916 18:41:31.186220  410453 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:41:31.186672  410453 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:41:31.186705  410453 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:41:31.186872  410453 host.go:66] Checking if "multinode-588591" exists ...
	I0916 18:41:31.187276  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.187363  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.203543  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36425
	I0916 18:41:31.204110  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.204664  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.204709  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.205080  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.205299  410453 main.go:141] libmachine: (multinode-588591) Calling .DriverName
	I0916 18:41:31.205501  410453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:41:31.205532  410453 main.go:141] libmachine: (multinode-588591) Calling .GetSSHHostname
	I0916 18:41:31.208554  410453 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:41:31.209023  410453 main.go:141] libmachine: (multinode-588591) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:80:30", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:38:41 +0000 UTC Type:0 Mac:52:54:00:5e:80:30 Iaid: IPaddr:192.168.39.90 Prefix:24 Hostname:multinode-588591 Clientid:01:52:54:00:5e:80:30}
	I0916 18:41:31.209054  410453 main.go:141] libmachine: (multinode-588591) DBG | domain multinode-588591 has defined IP address 192.168.39.90 and MAC address 52:54:00:5e:80:30 in network mk-multinode-588591
	I0916 18:41:31.209266  410453 main.go:141] libmachine: (multinode-588591) Calling .GetSSHPort
	I0916 18:41:31.209474  410453 main.go:141] libmachine: (multinode-588591) Calling .GetSSHKeyPath
	I0916 18:41:31.209692  410453 main.go:141] libmachine: (multinode-588591) Calling .GetSSHUsername
	I0916 18:41:31.209847  410453 sshutil.go:53] new ssh client: &{IP:192.168.39.90 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591/id_rsa Username:docker}
	I0916 18:41:31.289177  410453 ssh_runner.go:195] Run: systemctl --version
	I0916 18:41:31.296244  410453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:41:31.313019  410453 kubeconfig.go:125] found "multinode-588591" server: "https://192.168.39.90:8443"
	I0916 18:41:31.313069  410453 api_server.go:166] Checking apiserver status ...
	I0916 18:41:31.313121  410453 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 18:41:31.329124  410453 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1058/cgroup
	W0916 18:41:31.341267  410453 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1058/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0916 18:41:31.341331  410453 ssh_runner.go:195] Run: ls
	I0916 18:41:31.345974  410453 api_server.go:253] Checking apiserver healthz at https://192.168.39.90:8443/healthz ...
	I0916 18:41:31.350283  410453 api_server.go:279] https://192.168.39.90:8443/healthz returned 200:
	ok
	I0916 18:41:31.350313  410453 status.go:422] multinode-588591 apiserver status = Running (err=<nil>)
	I0916 18:41:31.350324  410453 status.go:257] multinode-588591 status: &{Name:multinode-588591 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:41:31.350343  410453 status.go:255] checking status of multinode-588591-m02 ...
	I0916 18:41:31.350763  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.350814  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.367630  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0916 18:41:31.368066  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.368642  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.368669  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.369112  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.369401  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .GetState
	I0916 18:41:31.371418  410453 status.go:330] multinode-588591-m02 host status = "Running" (err=<nil>)
	I0916 18:41:31.371439  410453 host.go:66] Checking if "multinode-588591-m02" exists ...
	I0916 18:41:31.371747  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.371789  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.387933  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41329
	I0916 18:41:31.388375  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.388954  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.388981  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.389316  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.389541  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .GetIP
	I0916 18:41:31.392178  410453 main.go:141] libmachine: (multinode-588591-m02) DBG | domain multinode-588591-m02 has defined MAC address 52:54:00:90:b8:02 in network mk-multinode-588591
	I0916 18:41:31.392631  410453 main.go:141] libmachine: (multinode-588591-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b8:02", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:39:43 +0000 UTC Type:0 Mac:52:54:00:90:b8:02 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-588591-m02 Clientid:01:52:54:00:90:b8:02}
	I0916 18:41:31.392657  410453 main.go:141] libmachine: (multinode-588591-m02) DBG | domain multinode-588591-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:90:b8:02 in network mk-multinode-588591
	I0916 18:41:31.392881  410453 host.go:66] Checking if "multinode-588591-m02" exists ...
	I0916 18:41:31.393388  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.393438  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.409701  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0916 18:41:31.410185  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.410732  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.410759  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.411080  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.411324  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .DriverName
	I0916 18:41:31.411494  410453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 18:41:31.411513  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .GetSSHHostname
	I0916 18:41:31.414976  410453 main.go:141] libmachine: (multinode-588591-m02) DBG | domain multinode-588591-m02 has defined MAC address 52:54:00:90:b8:02 in network mk-multinode-588591
	I0916 18:41:31.415460  410453 main.go:141] libmachine: (multinode-588591-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:b8:02", ip: ""} in network mk-multinode-588591: {Iface:virbr1 ExpiryTime:2024-09-16 19:39:43 +0000 UTC Type:0 Mac:52:54:00:90:b8:02 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:multinode-588591-m02 Clientid:01:52:54:00:90:b8:02}
	I0916 18:41:31.415484  410453 main.go:141] libmachine: (multinode-588591-m02) DBG | domain multinode-588591-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:90:b8:02 in network mk-multinode-588591
	I0916 18:41:31.415699  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .GetSSHPort
	I0916 18:41:31.415917  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .GetSSHKeyPath
	I0916 18:41:31.416064  410453 main.go:141] libmachine: (multinode-588591-m02) Calling .GetSSHUsername
	I0916 18:41:31.416255  410453 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19649-371203/.minikube/machines/multinode-588591-m02/id_rsa Username:docker}
	I0916 18:41:31.492720  410453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 18:41:31.508123  410453 status.go:257] multinode-588591-m02 status: &{Name:multinode-588591-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 18:41:31.508176  410453 status.go:255] checking status of multinode-588591-m03 ...
	I0916 18:41:31.508512  410453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0916 18:41:31.508552  410453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0916 18:41:31.525636  410453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40097
	I0916 18:41:31.526138  410453 main.go:141] libmachine: () Calling .GetVersion
	I0916 18:41:31.526686  410453 main.go:141] libmachine: Using API Version  1
	I0916 18:41:31.526711  410453 main.go:141] libmachine: () Calling .SetConfigRaw
	I0916 18:41:31.527071  410453 main.go:141] libmachine: () Calling .GetMachineName
	I0916 18:41:31.527264  410453 main.go:141] libmachine: (multinode-588591-m03) Calling .GetState
	I0916 18:41:31.529060  410453 status.go:330] multinode-588591-m03 host status = "Stopped" (err=<nil>)
	I0916 18:41:31.529084  410453 status.go:343] host is not running, skipping remaining checks
	I0916 18:41:31.529092  410453 status.go:257] multinode-588591-m03 status: &{Name:multinode-588591-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
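
For reference, the stderr trace above shows how the control-plane status is derived: SSH into the node, check the kubelet service, then probe the apiserver's /healthz endpoint over HTTPS and treat a 200 as Running. The sketch below is only an illustration of that last probe, not minikube's actual implementation; the address comes from the log and the TLS-verification skip is a shortcut for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is signed by the cluster CA; a faithful
		// client would load that CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.90:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Stopped:", err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode == http.StatusOK {
		fmt.Println("apiserver: Running") // the log shows "returned 200: ok"
	} else {
		fmt.Println("apiserver: Error, HTTP status", resp.StatusCode)
	}
}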

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-588591 node start m03 -v=7 --alsologtostderr: (39.665344488s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.31s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-588591 node delete m03: (1.486078089s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.01s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (200.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-588591 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-588591 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.202140244s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-588591 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (200.73s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (46.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-588591
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-588591-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-588591-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.814708ms)

                                                
                                                
-- stdout --
	* [multinode-588591-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-588591-m02' is duplicated with machine name 'multinode-588591-m02' in profile 'multinode-588591'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-588591-m03 --driver=kvm2  --container-runtime=crio
E0916 18:53:56.985140  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-588591-m03 --driver=kvm2  --container-runtime=crio: (45.150914704s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-588591
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-588591: exit status 80 (216.92436ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-588591 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-588591-m03 already exists in multinode-588591-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-588591-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.31s)
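
Note: the name-conflict check exercised above can be reproduced outside the test harness. A minimal sketch, assuming an existing multi-node profile named multinode-588591 whose second machine is multinode-588591-m02 (names reused from this run purely for illustration):

	$ out/minikube-linux-amd64 node list -p multinode-588591
	# reusing an existing machine name as a new profile name is rejected with MK_USAGE (exit status 14)
	$ out/minikube-linux-amd64 start -p multinode-588591-m02 --driver=kvm2 --container-runtime=crio
	# a non-conflicting profile name is accepted, and can be cleaned up afterwards
	$ out/minikube-linux-amd64 start -p multinode-588591-m03 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p multinode-588591-m03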

                                                
                                    
x
+
TestScheduledStopUnix (114.98s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-544918 --memory=2048 --driver=kvm2  --container-runtime=crio
E0916 18:58:56.983212  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-544918 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.317162999s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-544918 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-544918 -n scheduled-stop-544918
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-544918 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-544918 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-544918 -n scheduled-stop-544918
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-544918
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-544918 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-544918
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-544918: exit status 7 (76.04454ms)

                                                
                                                
-- stdout --
	scheduled-stop-544918
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-544918 -n scheduled-stop-544918
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-544918 -n scheduled-stop-544918: exit status 7 (66.085215ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-544918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-544918
--- PASS: TestScheduledStopUnix (114.98s)
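
Note: the scheduled-stop flow above reduces to a short command sequence. A minimal sketch, assuming a running profile named scheduled-stop-544918 (name reused from the test):

	# schedule a stop 5 minutes out, then replace it with a 15s schedule
	$ out/minikube-linux-amd64 stop -p scheduled-stop-544918 --schedule 5m
	$ out/minikube-linux-amd64 stop -p scheduled-stop-544918 --schedule 15s
	# cancel a pending scheduled stop
	$ out/minikube-linux-amd64 stop -p scheduled-stop-544918 --cancel-scheduled
	# once a schedule has fired, status reports Stopped and exits with status 7
	$ out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-544918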

                                                
                                    
x
+
TestRunningBinaryUpgrade (201.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2581358782 start -p running-upgrade-815439 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2581358782 start -p running-upgrade-815439 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m4.295281657s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-815439 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-815439 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m13.86662333s)
helpers_test.go:175: Cleaning up "running-upgrade-815439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-815439
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-815439: (1.164049233s)
--- PASS: TestRunningBinaryUpgrade (201.90s)
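
Note: the running-binary upgrade path is simply two starts against the same profile with different binaries. A minimal sketch, assuming an older minikube binary extracted to a temp path like /tmp/minikube-v1.26.0.<suffix> (the numeric suffix varies per run; <suffix> is a placeholder):

	# create the cluster with the legacy binary
	$ /tmp/minikube-v1.26.0.<suffix> start -p running-upgrade-815439 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# restart the same, still-running profile with the binary under test
	$ out/minikube-linux-amd64 start -p running-upgrade-815439 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p running-upgrade-815439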

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.555414ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-695257] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19649
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19649-371203/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19649-371203/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
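
Note: the MK_USAGE failure above is the expected guard against combining --no-kubernetes with an explicit --kubernetes-version. A minimal sketch of the rejected invocation and the remedy the error message itself suggests (profile name reused from the test):

	# rejected: exit status 14, MK_USAGE
	$ out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# clear a globally configured version instead, as advised by the error
	$ minikube config unset kubernetes-version
	# then start the profile without Kubernetes at all
	$ out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --driver=kvm2 --container-runtime=crio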

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-695257 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-695257 --driver=kvm2  --container-runtime=crio: (1m35.81723585s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-695257 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.08s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.43s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2340083195 start -p stopped-upgrade-841069 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2340083195 start -p stopped-upgrade-841069 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m10.294891987s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2340083195 -p stopped-upgrade-841069 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2340083195 -p stopped-upgrade-841069 stop: (1.457891466s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-841069 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-841069 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.112445909s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.87s)
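
Note: unlike the running-binary case above, this path stops the cluster before upgrading it. A minimal sketch, assuming the same kind of legacy binary under /tmp (suffix varies per run; <suffix> is a placeholder):

	$ /tmp/minikube-v1.26.0.<suffix> start -p stopped-upgrade-841069 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.<suffix> -p stopped-upgrade-841069 stop
	# start the stopped profile with the binary under test, then inspect its logs
	$ out/minikube-linux-amd64 start -p stopped-upgrade-841069 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 logs -p stopped-upgrade-841069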

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (42.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.38293036s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-695257 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-695257 status -o json: exit status 2 (264.653036ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-695257","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-695257
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-695257: (1.127578561s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-695257 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.755261899s)
--- PASS: TestNoKubernetes/serial/Start (28.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-695257 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-695257 "sudo systemctl is-active --quiet service kubelet": exit status 1 (206.779065ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
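
Note: the verification above relies on systemctl's exit status inside the guest; minikube ssh exits non-zero when the remote command fails (here the remote systemctl returned status 3). A minimal sketch:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-695257 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?   # non-zero when kubelet is not running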

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (15.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0916 19:03:40.054080  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.786042879s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-695257
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-695257: (1.395008452s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (27.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-695257 --driver=kvm2  --container-runtime=crio
E0916 19:03:56.983694  378463 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19649-371203/.minikube/profiles/functional-472457/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-695257 --driver=kvm2  --container-runtime=crio: (27.29997323s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (27.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-841069
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-695257 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-695257 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.005034ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestPause/serial/Start (106.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-671192 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-671192 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.98629847s)
--- PASS: TestPause/serial/Start (106.99s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (58.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-671192 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-671192 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (58.786023289s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (58.81s)

                                                
                                    
x
+
TestPause/serial/Pause (0.81s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-671192 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.81s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-671192 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-671192 --output=json --layout=cluster: exit status 2 (256.706105ms)

                                                
                                                
-- stdout --
	{"Name":"pause-671192","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-671192","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.77s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-671192 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.99s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-671192 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.16s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-671192 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-671192 --alsologtostderr -v=5: (1.158182703s)
--- PASS: TestPause/serial/DeletePaused (1.16s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)
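
Note: the pause test sequence above maps to a short CLI workflow. A minimal sketch, assuming a started profile named pause-671192 (as in this run):

	$ out/minikube-linux-amd64 pause -p pause-671192 --alsologtostderr -v=5
	# while paused, cluster status reports StatusName "Paused" (code 418) and the command exits with status 2
	$ out/minikube-linux-amd64 status -p pause-671192 --output=json --layout=cluster
	$ out/minikube-linux-amd64 unpause -p pause-671192 --alsologtostderr -v=5
	$ out/minikube-linux-amd64 delete -p pause-671192 --alsologtostderr -v=5
	# confirm the profile is gone
	$ out/minikube-linux-amd64 profile list --output json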

                                                
                                    

Test skip (32/213)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    