Test Report: KVM_Linux_crio 19712

c4dd788a1c1ea09a0f3bb20836a8b75126e684b1:2024-09-27:36398

Test fail (12/207)

TestAddons/Setup (2400.05s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-511364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-511364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: signal: killed (39m59.958762025s)
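The start command was killed after 39m59.96s, which lines up with the roughly 40-minute window recorded for TestAddons/Setup (2400.05s), so the cluster and addon rollout were most likely still in progress when the run was cut off rather than failing outright. A minimal local replay, assuming a kvm2/libvirt host and a locally built out/minikube-linux-amd64, is sketched below; every start flag is copied verbatim from the failing invocation above, and the trailing delete is only there to clean up the profile afterwards:

	# Replay the failing start with the same addon set, driver and runtime (flags copied from the run above).
	out/minikube-linux-amd64 start -p addons-511364 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns

	# Tear the profile down once finished (standard minikube cleanup).
	out/minikube-linux-amd64 delete -p addons-511364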

-- stdout --
	* [addons-511364] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-511364" primary control-plane node in "addons-511364" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	  - Using image docker.io/registry:2.8.3
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	* Verifying registry addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-511364 service yakd-dashboard -n yakd-dashboard
	
	* Verifying ingress addon...
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-511364 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, nvidia-device-plugin, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0927 16:56:40.137155   19099 out.go:345] Setting OutFile to fd 1 ...
	I0927 16:56:40.137404   19099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:40.137420   19099 out.go:358] Setting ErrFile to fd 2...
	I0927 16:56:40.137425   19099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:40.137652   19099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 16:56:40.138277   19099 out.go:352] Setting JSON to false
	I0927 16:56:40.139118   19099 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2345,"bootTime":1727453855,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 16:56:40.139181   19099 start.go:139] virtualization: kvm guest
	I0927 16:56:40.141180   19099 out.go:177] * [addons-511364] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 16:56:40.142512   19099 notify.go:220] Checking for updates...
	I0927 16:56:40.142519   19099 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 16:56:40.143920   19099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 16:56:40.145254   19099 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 16:56:40.146851   19099 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 16:56:40.148623   19099 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 16:56:40.150233   19099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 16:56:40.151835   19099 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 16:56:40.185286   19099 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 16:56:40.186850   19099 start.go:297] selected driver: kvm2
	I0927 16:56:40.186869   19099 start.go:901] validating driver "kvm2" against <nil>
	I0927 16:56:40.186881   19099 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 16:56:40.187584   19099 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:56:40.187658   19099 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 16:56:40.202810   19099 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 16:56:40.202854   19099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 16:56:40.203103   19099 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 16:56:40.203130   19099 cni.go:84] Creating CNI manager for ""
	I0927 16:56:40.203168   19099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 16:56:40.203189   19099 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 16:56:40.203242   19099 start.go:340] cluster config:
	{Name:addons-511364 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-511364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:56:40.203377   19099 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:56:40.205536   19099 out.go:177] * Starting "addons-511364" primary control-plane node in "addons-511364" cluster
	I0927 16:56:40.207175   19099 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 16:56:40.207238   19099 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 16:56:40.207250   19099 cache.go:56] Caching tarball of preloaded images
	I0927 16:56:40.207365   19099 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 16:56:40.207377   19099 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 16:56:40.207715   19099 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/config.json ...
	I0927 16:56:40.207737   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/config.json: {Name:mk7e819e5a01edda8713f83071cb4e72703ade98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:56:40.207882   19099 start.go:360] acquireMachinesLock for addons-511364: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 16:56:40.207934   19099 start.go:364] duration metric: took 38.508µs to acquireMachinesLock for "addons-511364"
	I0927 16:56:40.207953   19099 start.go:93] Provisioning new machine with config: &{Name:addons-511364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-511364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 16:56:40.208008   19099 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 16:56:40.209836   19099 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0927 16:56:40.210016   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:56:40.210058   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:56:40.224570   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34507
	I0927 16:56:40.224992   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:56:40.225589   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:56:40.225609   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:56:40.226038   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:56:40.226314   19099 main.go:141] libmachine: (addons-511364) Calling .GetMachineName
	I0927 16:56:40.226540   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:56:40.226750   19099 start.go:159] libmachine.API.Create for "addons-511364" (driver="kvm2")
	I0927 16:56:40.226781   19099 client.go:168] LocalClient.Create starting
	I0927 16:56:40.226838   19099 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 16:56:40.319566   19099 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 16:56:40.544556   19099 main.go:141] libmachine: Running pre-create checks...
	I0927 16:56:40.544581   19099 main.go:141] libmachine: (addons-511364) Calling .PreCreateCheck
	I0927 16:56:40.545098   19099 main.go:141] libmachine: (addons-511364) Calling .GetConfigRaw
	I0927 16:56:40.545610   19099 main.go:141] libmachine: Creating machine...
	I0927 16:56:40.545627   19099 main.go:141] libmachine: (addons-511364) Calling .Create
	I0927 16:56:40.545787   19099 main.go:141] libmachine: (addons-511364) Creating KVM machine...
	I0927 16:56:40.546991   19099 main.go:141] libmachine: (addons-511364) DBG | found existing default KVM network
	I0927 16:56:40.547712   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:40.547529   19121 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I0927 16:56:40.547740   19099 main.go:141] libmachine: (addons-511364) DBG | created network xml: 
	I0927 16:56:40.547754   19099 main.go:141] libmachine: (addons-511364) DBG | <network>
	I0927 16:56:40.547761   19099 main.go:141] libmachine: (addons-511364) DBG |   <name>mk-addons-511364</name>
	I0927 16:56:40.547769   19099 main.go:141] libmachine: (addons-511364) DBG |   <dns enable='no'/>
	I0927 16:56:40.547779   19099 main.go:141] libmachine: (addons-511364) DBG |   
	I0927 16:56:40.547788   19099 main.go:141] libmachine: (addons-511364) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 16:56:40.547797   19099 main.go:141] libmachine: (addons-511364) DBG |     <dhcp>
	I0927 16:56:40.547803   19099 main.go:141] libmachine: (addons-511364) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 16:56:40.547810   19099 main.go:141] libmachine: (addons-511364) DBG |     </dhcp>
	I0927 16:56:40.547814   19099 main.go:141] libmachine: (addons-511364) DBG |   </ip>
	I0927 16:56:40.547826   19099 main.go:141] libmachine: (addons-511364) DBG |   
	I0927 16:56:40.547833   19099 main.go:141] libmachine: (addons-511364) DBG | </network>
	I0927 16:56:40.547841   19099 main.go:141] libmachine: (addons-511364) DBG | 
	I0927 16:56:40.554087   19099 main.go:141] libmachine: (addons-511364) DBG | trying to create private KVM network mk-addons-511364 192.168.39.0/24...
	I0927 16:56:40.620801   19099 main.go:141] libmachine: (addons-511364) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364 ...
	I0927 16:56:40.620838   19099 main.go:141] libmachine: (addons-511364) DBG | private KVM network mk-addons-511364 192.168.39.0/24 created
	I0927 16:56:40.620851   19099 main.go:141] libmachine: (addons-511364) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 16:56:40.620877   19099 main.go:141] libmachine: (addons-511364) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 16:56:40.620900   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:40.620545   19121 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 16:56:40.888017   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:40.887867   19121 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa...
	I0927 16:56:41.102331   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:41.102169   19121 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/addons-511364.rawdisk...
	I0927 16:56:41.102365   19099 main.go:141] libmachine: (addons-511364) DBG | Writing magic tar header
	I0927 16:56:41.102401   19099 main.go:141] libmachine: (addons-511364) DBG | Writing SSH key tar header
	I0927 16:56:41.102424   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:41.102285   19121 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364 ...
	I0927 16:56:41.102445   19099 main.go:141] libmachine: (addons-511364) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364 (perms=drwx------)
	I0927 16:56:41.102456   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364
	I0927 16:56:41.102479   19099 main.go:141] libmachine: (addons-511364) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 16:56:41.102495   19099 main.go:141] libmachine: (addons-511364) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 16:56:41.102505   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 16:56:41.102531   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 16:56:41.102543   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 16:56:41.102553   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 16:56:41.102560   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home/jenkins
	I0927 16:56:41.102572   19099 main.go:141] libmachine: (addons-511364) DBG | Checking permissions on dir: /home
	I0927 16:56:41.102580   19099 main.go:141] libmachine: (addons-511364) DBG | Skipping /home - not owner
	I0927 16:56:41.102589   19099 main.go:141] libmachine: (addons-511364) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 16:56:41.102599   19099 main.go:141] libmachine: (addons-511364) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 16:56:41.102611   19099 main.go:141] libmachine: (addons-511364) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 16:56:41.102621   19099 main.go:141] libmachine: (addons-511364) Creating domain...
	I0927 16:56:41.103567   19099 main.go:141] libmachine: (addons-511364) define libvirt domain using xml: 
	I0927 16:56:41.103578   19099 main.go:141] libmachine: (addons-511364) <domain type='kvm'>
	I0927 16:56:41.103593   19099 main.go:141] libmachine: (addons-511364)   <name>addons-511364</name>
	I0927 16:56:41.103600   19099 main.go:141] libmachine: (addons-511364)   <memory unit='MiB'>4000</memory>
	I0927 16:56:41.103607   19099 main.go:141] libmachine: (addons-511364)   <vcpu>2</vcpu>
	I0927 16:56:41.103614   19099 main.go:141] libmachine: (addons-511364)   <features>
	I0927 16:56:41.103626   19099 main.go:141] libmachine: (addons-511364)     <acpi/>
	I0927 16:56:41.103635   19099 main.go:141] libmachine: (addons-511364)     <apic/>
	I0927 16:56:41.103644   19099 main.go:141] libmachine: (addons-511364)     <pae/>
	I0927 16:56:41.103648   19099 main.go:141] libmachine: (addons-511364)     
	I0927 16:56:41.103653   19099 main.go:141] libmachine: (addons-511364)   </features>
	I0927 16:56:41.103665   19099 main.go:141] libmachine: (addons-511364)   <cpu mode='host-passthrough'>
	I0927 16:56:41.103674   19099 main.go:141] libmachine: (addons-511364)   
	I0927 16:56:41.103689   19099 main.go:141] libmachine: (addons-511364)   </cpu>
	I0927 16:56:41.103699   19099 main.go:141] libmachine: (addons-511364)   <os>
	I0927 16:56:41.103709   19099 main.go:141] libmachine: (addons-511364)     <type>hvm</type>
	I0927 16:56:41.103722   19099 main.go:141] libmachine: (addons-511364)     <boot dev='cdrom'/>
	I0927 16:56:41.103736   19099 main.go:141] libmachine: (addons-511364)     <boot dev='hd'/>
	I0927 16:56:41.103746   19099 main.go:141] libmachine: (addons-511364)     <bootmenu enable='no'/>
	I0927 16:56:41.103753   19099 main.go:141] libmachine: (addons-511364)   </os>
	I0927 16:56:41.103758   19099 main.go:141] libmachine: (addons-511364)   <devices>
	I0927 16:56:41.103764   19099 main.go:141] libmachine: (addons-511364)     <disk type='file' device='cdrom'>
	I0927 16:56:41.103775   19099 main.go:141] libmachine: (addons-511364)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/boot2docker.iso'/>
	I0927 16:56:41.103782   19099 main.go:141] libmachine: (addons-511364)       <target dev='hdc' bus='scsi'/>
	I0927 16:56:41.103788   19099 main.go:141] libmachine: (addons-511364)       <readonly/>
	I0927 16:56:41.103797   19099 main.go:141] libmachine: (addons-511364)     </disk>
	I0927 16:56:41.103809   19099 main.go:141] libmachine: (addons-511364)     <disk type='file' device='disk'>
	I0927 16:56:41.103820   19099 main.go:141] libmachine: (addons-511364)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 16:56:41.103833   19099 main.go:141] libmachine: (addons-511364)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/addons-511364.rawdisk'/>
	I0927 16:56:41.103849   19099 main.go:141] libmachine: (addons-511364)       <target dev='hda' bus='virtio'/>
	I0927 16:56:41.103860   19099 main.go:141] libmachine: (addons-511364)     </disk>
	I0927 16:56:41.103868   19099 main.go:141] libmachine: (addons-511364)     <interface type='network'>
	I0927 16:56:41.103876   19099 main.go:141] libmachine: (addons-511364)       <source network='mk-addons-511364'/>
	I0927 16:56:41.103880   19099 main.go:141] libmachine: (addons-511364)       <model type='virtio'/>
	I0927 16:56:41.103890   19099 main.go:141] libmachine: (addons-511364)     </interface>
	I0927 16:56:41.103899   19099 main.go:141] libmachine: (addons-511364)     <interface type='network'>
	I0927 16:56:41.103911   19099 main.go:141] libmachine: (addons-511364)       <source network='default'/>
	I0927 16:56:41.103918   19099 main.go:141] libmachine: (addons-511364)       <model type='virtio'/>
	I0927 16:56:41.103929   19099 main.go:141] libmachine: (addons-511364)     </interface>
	I0927 16:56:41.103938   19099 main.go:141] libmachine: (addons-511364)     <serial type='pty'>
	I0927 16:56:41.103964   19099 main.go:141] libmachine: (addons-511364)       <target port='0'/>
	I0927 16:56:41.103985   19099 main.go:141] libmachine: (addons-511364)     </serial>
	I0927 16:56:41.104006   19099 main.go:141] libmachine: (addons-511364)     <console type='pty'>
	I0927 16:56:41.104037   19099 main.go:141] libmachine: (addons-511364)       <target type='serial' port='0'/>
	I0927 16:56:41.104049   19099 main.go:141] libmachine: (addons-511364)     </console>
	I0927 16:56:41.104058   19099 main.go:141] libmachine: (addons-511364)     <rng model='virtio'>
	I0927 16:56:41.104067   19099 main.go:141] libmachine: (addons-511364)       <backend model='random'>/dev/random</backend>
	I0927 16:56:41.104071   19099 main.go:141] libmachine: (addons-511364)     </rng>
	I0927 16:56:41.104076   19099 main.go:141] libmachine: (addons-511364)     
	I0927 16:56:41.104083   19099 main.go:141] libmachine: (addons-511364)     
	I0927 16:56:41.104087   19099 main.go:141] libmachine: (addons-511364)   </devices>
	I0927 16:56:41.104094   19099 main.go:141] libmachine: (addons-511364) </domain>
	I0927 16:56:41.104101   19099 main.go:141] libmachine: (addons-511364) 
	I0927 16:56:41.110724   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:11:f2:1d in network default
	I0927 16:56:41.111248   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:41.111265   19099 main.go:141] libmachine: (addons-511364) Ensuring networks are active...
	I0927 16:56:41.112057   19099 main.go:141] libmachine: (addons-511364) Ensuring network default is active
	I0927 16:56:41.112317   19099 main.go:141] libmachine: (addons-511364) Ensuring network mk-addons-511364 is active
	I0927 16:56:41.112930   19099 main.go:141] libmachine: (addons-511364) Getting domain xml...
	I0927 16:56:41.113670   19099 main.go:141] libmachine: (addons-511364) Creating domain...
	I0927 16:56:42.566950   19099 main.go:141] libmachine: (addons-511364) Waiting to get IP...
	I0927 16:56:42.567916   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:42.568367   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:42.568438   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:42.568363   19121 retry.go:31] will retry after 231.542449ms: waiting for machine to come up
	I0927 16:56:42.802153   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:42.802811   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:42.802837   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:42.802758   19121 retry.go:31] will retry after 387.770277ms: waiting for machine to come up
	I0927 16:56:43.192466   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:43.193024   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:43.193048   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:43.192976   19121 retry.go:31] will retry after 330.340209ms: waiting for machine to come up
	I0927 16:56:43.524469   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:43.524939   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:43.524968   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:43.524888   19121 retry.go:31] will retry after 590.586553ms: waiting for machine to come up
	I0927 16:56:44.116745   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:44.117265   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:44.117288   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:44.117217   19121 retry.go:31] will retry after 666.131133ms: waiting for machine to come up
	I0927 16:56:44.784785   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:44.785225   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:44.785249   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:44.785175   19121 retry.go:31] will retry after 739.182184ms: waiting for machine to come up
	I0927 16:56:45.525988   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:45.526451   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:45.526478   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:45.526402   19121 retry.go:31] will retry after 791.313986ms: waiting for machine to come up
	I0927 16:56:46.319328   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:46.319691   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:46.319727   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:46.319669   19121 retry.go:31] will retry after 1.247172922s: waiting for machine to come up
	I0927 16:56:47.568752   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:47.569274   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:47.569309   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:47.569209   19121 retry.go:31] will retry after 1.536974139s: waiting for machine to come up
	I0927 16:56:49.107958   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:49.108431   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:49.108445   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:49.108394   19121 retry.go:31] will retry after 1.495652932s: waiting for machine to come up
	I0927 16:56:50.605396   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:50.605852   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:50.605881   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:50.605797   19121 retry.go:31] will retry after 1.941999699s: waiting for machine to come up
	I0927 16:56:52.550008   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:52.550426   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:52.550445   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:52.550394   19121 retry.go:31] will retry after 3.136397261s: waiting for machine to come up
	I0927 16:56:55.688360   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:55.688777   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:55.688792   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:55.688744   19121 retry.go:31] will retry after 3.63350914s: waiting for machine to come up
	I0927 16:56:59.326727   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:56:59.327136   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find current IP address of domain addons-511364 in network mk-addons-511364
	I0927 16:56:59.327163   19099 main.go:141] libmachine: (addons-511364) DBG | I0927 16:56:59.327079   19121 retry.go:31] will retry after 5.163360631s: waiting for machine to come up
	I0927 16:57:04.495323   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.495761   19099 main.go:141] libmachine: (addons-511364) Found IP for machine: 192.168.39.239
	I0927 16:57:04.495786   19099 main.go:141] libmachine: (addons-511364) Reserving static IP address...
	I0927 16:57:04.495798   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has current primary IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.496146   19099 main.go:141] libmachine: (addons-511364) DBG | unable to find host DHCP lease matching {name: "addons-511364", mac: "52:54:00:5c:e9:5c", ip: "192.168.39.239"} in network mk-addons-511364
	I0927 16:57:04.571226   19099 main.go:141] libmachine: (addons-511364) DBG | Getting to WaitForSSH function...
	I0927 16:57:04.571257   19099 main.go:141] libmachine: (addons-511364) Reserved static IP address: 192.168.39.239
	I0927 16:57:04.571269   19099 main.go:141] libmachine: (addons-511364) Waiting for SSH to be available...
	I0927 16:57:04.574071   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.574555   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:04.574586   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.574742   19099 main.go:141] libmachine: (addons-511364) DBG | Using SSH client type: external
	I0927 16:57:04.574762   19099 main.go:141] libmachine: (addons-511364) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa (-rw-------)
	I0927 16:57:04.574846   19099 main.go:141] libmachine: (addons-511364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.239 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 16:57:04.574864   19099 main.go:141] libmachine: (addons-511364) DBG | About to run SSH command:
	I0927 16:57:04.574884   19099 main.go:141] libmachine: (addons-511364) DBG | exit 0
	I0927 16:57:04.706775   19099 main.go:141] libmachine: (addons-511364) DBG | SSH cmd err, output: <nil>: 
	I0927 16:57:04.707045   19099 main.go:141] libmachine: (addons-511364) KVM machine creation complete!
	I0927 16:57:04.707352   19099 main.go:141] libmachine: (addons-511364) Calling .GetConfigRaw
	I0927 16:57:04.707862   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:04.708086   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:04.708226   19099 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 16:57:04.708238   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:04.709705   19099 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 16:57:04.709719   19099 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 16:57:04.709737   19099 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 16:57:04.709746   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:04.712593   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.713047   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:04.713070   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.713225   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:04.713440   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:04.713686   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:04.713811   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:04.713974   19099 main.go:141] libmachine: Using SSH client type: native
	I0927 16:57:04.714224   19099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0927 16:57:04.714237   19099 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 16:57:04.818021   19099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 16:57:04.818046   19099 main.go:141] libmachine: Detecting the provisioner...
	I0927 16:57:04.818055   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:04.820954   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.821450   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:04.821486   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.821701   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:04.821889   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:04.822140   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:04.822282   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:04.822442   19099 main.go:141] libmachine: Using SSH client type: native
	I0927 16:57:04.822617   19099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0927 16:57:04.822628   19099 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 16:57:04.931371   19099 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 16:57:04.931475   19099 main.go:141] libmachine: found compatible host: buildroot
	I0927 16:57:04.931489   19099 main.go:141] libmachine: Provisioning with buildroot...
	I0927 16:57:04.931497   19099 main.go:141] libmachine: (addons-511364) Calling .GetMachineName
	I0927 16:57:04.931748   19099 buildroot.go:166] provisioning hostname "addons-511364"
	I0927 16:57:04.931778   19099 main.go:141] libmachine: (addons-511364) Calling .GetMachineName
	I0927 16:57:04.931976   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:04.934490   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.934993   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:04.935022   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:04.935170   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:04.935383   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:04.935572   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:04.935747   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:04.935978   19099 main.go:141] libmachine: Using SSH client type: native
	I0927 16:57:04.936211   19099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0927 16:57:04.936227   19099 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-511364 && echo "addons-511364" | sudo tee /etc/hostname
	I0927 16:57:05.056996   19099 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-511364
	
	I0927 16:57:05.057027   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:05.059951   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.060351   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.060380   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.060601   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:05.060794   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.060995   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.061288   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:05.061469   19099 main.go:141] libmachine: Using SSH client type: native
	I0927 16:57:05.061668   19099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0927 16:57:05.061684   19099 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-511364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-511364/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-511364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 16:57:05.176032   19099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 16:57:05.176064   19099 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 16:57:05.176099   19099 buildroot.go:174] setting up certificates
	I0927 16:57:05.176110   19099 provision.go:84] configureAuth start
	I0927 16:57:05.176119   19099 main.go:141] libmachine: (addons-511364) Calling .GetMachineName
	I0927 16:57:05.176383   19099 main.go:141] libmachine: (addons-511364) Calling .GetIP
	I0927 16:57:05.179525   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.179908   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.179940   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.180084   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:05.182485   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.182750   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.182780   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.182936   19099 provision.go:143] copyHostCerts
	I0927 16:57:05.183021   19099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 16:57:05.183147   19099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 16:57:05.183210   19099 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 16:57:05.183277   19099 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.addons-511364 san=[127.0.0.1 192.168.39.239 addons-511364 localhost minikube]
	I0927 16:57:05.438942   19099 provision.go:177] copyRemoteCerts
	I0927 16:57:05.439015   19099 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 16:57:05.439045   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:05.441941   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.442438   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.442469   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.442697   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:05.442901   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.443083   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:05.443279   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:05.524815   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 16:57:05.548093   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 16:57:05.572034   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 16:57:05.594752   19099 provision.go:87] duration metric: took 418.629654ms to configureAuth
	I0927 16:57:05.594783   19099 buildroot.go:189] setting minikube options for container-runtime
	I0927 16:57:05.594999   19099 config.go:182] Loaded profile config "addons-511364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 16:57:05.595093   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:05.597894   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.598193   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.598228   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.598352   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:05.598557   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.598707   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.598871   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:05.599024   19099 main.go:141] libmachine: Using SSH client type: native
	I0927 16:57:05.599186   19099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0927 16:57:05.599199   19099 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 16:57:05.828476   19099 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 16:57:05.828503   19099 main.go:141] libmachine: Checking connection to Docker...
	I0927 16:57:05.828513   19099 main.go:141] libmachine: (addons-511364) Calling .GetURL
	I0927 16:57:05.829936   19099 main.go:141] libmachine: (addons-511364) DBG | Using libvirt version 6000000
	I0927 16:57:05.833259   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.833704   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.833740   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.833941   19099 main.go:141] libmachine: Docker is up and running!
	I0927 16:57:05.833958   19099 main.go:141] libmachine: Reticulating splines...
	I0927 16:57:05.833983   19099 client.go:171] duration metric: took 25.607174167s to LocalClient.Create
	I0927 16:57:05.834010   19099 start.go:167] duration metric: took 25.607260809s to libmachine.API.Create "addons-511364"
	I0927 16:57:05.834022   19099 start.go:293] postStartSetup for "addons-511364" (driver="kvm2")
	I0927 16:57:05.834035   19099 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 16:57:05.834056   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:05.834289   19099 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 16:57:05.834310   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:05.836767   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.837204   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.837234   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.837407   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:05.837598   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.837765   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:05.837906   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:05.921237   19099 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 16:57:05.925222   19099 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 16:57:05.925247   19099 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 16:57:05.925324   19099 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 16:57:05.925355   19099 start.go:296] duration metric: took 91.325108ms for postStartSetup
	I0927 16:57:05.925394   19099 main.go:141] libmachine: (addons-511364) Calling .GetConfigRaw
	I0927 16:57:05.925935   19099 main.go:141] libmachine: (addons-511364) Calling .GetIP
	I0927 16:57:05.928693   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.928972   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.929001   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.929230   19099 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/config.json ...
	I0927 16:57:05.929471   19099 start.go:128] duration metric: took 25.721452105s to createHost
	I0927 16:57:05.929500   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:05.932150   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.932584   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:05.932614   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:05.932781   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:05.932970   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.933118   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:05.933279   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:05.933479   19099 main.go:141] libmachine: Using SSH client type: native
	I0927 16:57:05.933737   19099 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.239 22 <nil> <nil>}
	I0927 16:57:05.933756   19099 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 16:57:06.039335   19099 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727456226.016833290
	
	I0927 16:57:06.039370   19099 fix.go:216] guest clock: 1727456226.016833290
	I0927 16:57:06.039381   19099 fix.go:229] Guest: 2024-09-27 16:57:06.01683329 +0000 UTC Remote: 2024-09-27 16:57:05.929485164 +0000 UTC m=+25.826026864 (delta=87.348126ms)
	I0927 16:57:06.039440   19099 fix.go:200] guest clock delta is within tolerance: 87.348126ms
	I0927 16:57:06.039447   19099 start.go:83] releasing machines lock for "addons-511364", held for 25.83150201s
	I0927 16:57:06.039484   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:06.039800   19099 main.go:141] libmachine: (addons-511364) Calling .GetIP
	I0927 16:57:06.042147   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:06.042581   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:06.042607   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:06.042773   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:06.043224   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:06.043464   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:06.043596   19099 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 16:57:06.043643   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:06.043701   19099 ssh_runner.go:195] Run: cat /version.json
	I0927 16:57:06.043725   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:06.046338   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:06.046388   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:06.046677   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:06.046704   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:06.046771   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:06.046805   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:06.046856   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:06.047033   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:06.047045   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:06.047170   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:06.047187   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:06.047323   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:06.047321   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:06.047433   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:06.174755   19099 ssh_runner.go:195] Run: systemctl --version
	I0927 16:57:06.180787   19099 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 16:57:06.345434   19099 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 16:57:06.350829   19099 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 16:57:06.350904   19099 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 16:57:06.366289   19099 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 16:57:06.366314   19099 start.go:495] detecting cgroup driver to use...
	I0927 16:57:06.366383   19099 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 16:57:06.384036   19099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 16:57:06.399056   19099 docker.go:217] disabling cri-docker service (if available) ...
	I0927 16:57:06.399131   19099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 16:57:06.413174   19099 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 16:57:06.427045   19099 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 16:57:06.544622   19099 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 16:57:06.705589   19099 docker.go:233] disabling docker service ...
	I0927 16:57:06.705649   19099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 16:57:06.719460   19099 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 16:57:06.732085   19099 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 16:57:06.841544   19099 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 16:57:06.953603   19099 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 16:57:06.967779   19099 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 16:57:06.985817   19099 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 16:57:06.985874   19099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 16:57:06.995845   19099 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 16:57:06.995915   19099 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 16:57:07.005687   19099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 16:57:07.016235   19099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 16:57:07.026403   19099 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 16:57:07.036846   19099 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 16:57:07.047090   19099 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 16:57:07.063493   19099 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
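Taken together, the sed edits above should leave the CRI-O drop-in looking roughly like the sketch below; the key names are CRI-O's own TOML options, but the exact layout of 02-crio.conf can differ between guest image versions:

	# spot-check the values the sed commands above wrote (run on the guest)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]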
	I0927 16:57:07.073608   19099 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 16:57:07.082829   19099 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 16:57:07.082881   19099 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 16:57:07.095470   19099 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
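If the failed netfilter probe above matters when debugging, the module and sysctls set up by the modprobe/echo steps can be confirmed with standard tooling (nothing minikube-specific):

	lsmod | grep br_netfilter                  # loaded by the modprobe above
	sysctl net.ipv4.ip_forward                 # set to 1 by the echo above
	sysctl net.bridge.bridge-nf-call-iptables  # only exists once br_netfilter is loaded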
	I0927 16:57:07.104865   19099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:57:07.222691   19099 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 16:57:07.316381   19099 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 16:57:07.316467   19099 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 16:57:07.321288   19099 start.go:563] Will wait 60s for crictl version
	I0927 16:57:07.321368   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:57:07.325061   19099 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 16:57:07.363195   19099 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
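The crictl calls here work because of the /etc/crictl.yaml written a few steps earlier; a quick manual check on the guest would look like this (path and socket taken from the log):

	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version   # same output as the Run line above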
	I0927 16:57:07.363333   19099 ssh_runner.go:195] Run: crio --version
	I0927 16:57:07.391202   19099 ssh_runner.go:195] Run: crio --version
	I0927 16:57:07.418971   19099 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 16:57:07.420662   19099 main.go:141] libmachine: (addons-511364) Calling .GetIP
	I0927 16:57:07.423526   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:07.423920   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:07.423949   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:07.424161   19099 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 16:57:07.428067   19099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 16:57:07.439663   19099 kubeadm.go:883] updating cluster {Name:addons-511364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-511364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 16:57:07.439760   19099 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 16:57:07.439805   19099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 16:57:07.470408   19099 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 16:57:07.470476   19099 ssh_runner.go:195] Run: which lz4
	I0927 16:57:07.474296   19099 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 16:57:07.478306   19099 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 16:57:07.478337   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 16:57:08.620624   19099 crio.go:462] duration metric: took 1.146362431s to copy over tarball
	I0927 16:57:08.620689   19099 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 16:57:10.721445   19099 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.100730155s)
	I0927 16:57:10.721478   19099 crio.go:469] duration metric: took 2.100826662s to extract the tarball
	I0927 16:57:10.721487   19099 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 16:57:10.757950   19099 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 16:57:10.799342   19099 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 16:57:10.799367   19099 cache_images.go:84] Images are preloaded, skipping loading
	I0927 16:57:10.799376   19099 kubeadm.go:934] updating node { 192.168.39.239 8443 v1.31.1 crio true true} ...
	I0927 16:57:10.799469   19099 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-511364 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.239
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-511364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
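The kubelet unit and drop-in printed above are only copied onto the guest a few steps later (see the scp lines to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below); once they are in place, systemd's merged view can be checked with:

	systemctl cat kubelet        # shows the unit plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet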
	I0927 16:57:10.799574   19099 ssh_runner.go:195] Run: crio config
	I0927 16:57:10.844279   19099 cni.go:84] Creating CNI manager for ""
	I0927 16:57:10.844306   19099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 16:57:10.844317   19099 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 16:57:10.844343   19099 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.239 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-511364 NodeName:addons-511364 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 16:57:10.844494   19099 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-511364"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 16:57:10.844550   19099 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 16:57:10.853976   19099 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 16:57:10.854045   19099 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 16:57:10.863414   19099 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 16:57:10.880832   19099 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 16:57:10.897799   19099 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0927 16:57:10.914561   19099 ssh_runner.go:195] Run: grep 192.168.39.239	control-plane.minikube.internal$ /etc/hosts
	I0927 16:57:10.918424   19099 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.239	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
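The one-liner above rewrites /etc/hosts in place to pin the control-plane name; its effect can be confirmed with standard tools (hostname and IP are the ones from this run):

	getent hosts control-plane.minikube.internal     # expect 192.168.39.239
	grep control-plane.minikube.internal /etc/hosts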
	I0927 16:57:10.930338   19099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:57:11.044216   19099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 16:57:11.061051   19099 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364 for IP: 192.168.39.239
	I0927 16:57:11.061078   19099 certs.go:194] generating shared ca certs ...
	I0927 16:57:11.061099   19099 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.061299   19099 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 16:57:11.176878   19099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt ...
	I0927 16:57:11.176909   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt: {Name:mk178ecf14a51c29370cc9dcde6604138ca2346e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.177078   19099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key ...
	I0927 16:57:11.177091   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key: {Name:mk073c59a2ccfde0a590a09a836e9e6dd4f04544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.177162   19099 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 16:57:11.240152   19099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt ...
	I0927 16:57:11.240181   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt: {Name:mk2f5bf58500f31dbcd366df4cd0af7174abad88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.240336   19099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key ...
	I0927 16:57:11.240346   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key: {Name:mk6e0898c060ad7f2d917ee94685a644c67ddc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.240409   19099 certs.go:256] generating profile certs ...
	I0927 16:57:11.240483   19099 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/client.key
	I0927 16:57:11.240506   19099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/client.crt with IP's: []
	I0927 16:57:11.465082   19099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/client.crt ...
	I0927 16:57:11.465119   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/client.crt: {Name:mk5fb2b785eb978e506500e3b03ce61d514b8297 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.465293   19099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/client.key ...
	I0927 16:57:11.465306   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/client.key: {Name:mk298832062ed1a62fd7d8201b1e07d0455a7b5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.465390   19099 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.key.c4e318e6
	I0927 16:57:11.465407   19099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.crt.c4e318e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.239]
	I0927 16:57:11.551955   19099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.crt.c4e318e6 ...
	I0927 16:57:11.551993   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.crt.c4e318e6: {Name:mka062f8ebaf5138d5af3189f1afe78376ffb9a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.552148   19099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.key.c4e318e6 ...
	I0927 16:57:11.552161   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.key.c4e318e6: {Name:mk34bd879048b51d04eff6b75ef173a58415501a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.552229   19099 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.crt.c4e318e6 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.crt
	I0927 16:57:11.552297   19099 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.key.c4e318e6 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.key
	I0927 16:57:11.552344   19099 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.key
	I0927 16:57:11.552360   19099 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.crt with IP's: []
	I0927 16:57:11.767989   19099 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.crt ...
	I0927 16:57:11.768023   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.crt: {Name:mkaac7fae230b8a425eddcee274e71d25421a2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.768179   19099 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.key ...
	I0927 16:57:11.768190   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.key: {Name:mk1c4177d24c2cf88e17b53c2e038c088df8f552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:11.768345   19099 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 16:57:11.768383   19099 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 16:57:11.768408   19099 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 16:57:11.768427   19099 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 16:57:11.768965   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 16:57:11.794278   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 16:57:11.816357   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 16:57:11.839292   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 16:57:11.864443   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 16:57:11.887569   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 16:57:11.911437   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 16:57:11.935770   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/addons-511364/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 16:57:11.959413   19099 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 16:57:11.983129   19099 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 16:57:11.999091   19099 ssh_runner.go:195] Run: openssl version
	I0927 16:57:12.004834   19099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 16:57:12.015370   19099 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 16:57:12.019945   19099 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 16:57:12.020015   19099 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 16:57:12.025637   19099 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
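The 8-hex-digit name used in the symlink above is OpenSSL's subject hash of the CA certificate; the value seen in this run (b5213941) can be reproduced on the guest with:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem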
	I0927 16:57:12.035915   19099 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 16:57:12.039946   19099 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 16:57:12.040000   19099 kubeadm.go:392] StartCluster: {Name:addons-511364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-511364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:57:12.040084   19099 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 16:57:12.040128   19099 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 16:57:12.074035   19099 cri.go:89] found id: ""
	I0927 16:57:12.074117   19099 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 16:57:12.083817   19099 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 16:57:12.093448   19099 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 16:57:12.103108   19099 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 16:57:12.103128   19099 kubeadm.go:157] found existing configuration files:
	
	I0927 16:57:12.103167   19099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 16:57:12.112210   19099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 16:57:12.112278   19099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 16:57:12.122713   19099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 16:57:12.132480   19099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 16:57:12.132535   19099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 16:57:12.141701   19099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 16:57:12.150690   19099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 16:57:12.150758   19099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 16:57:12.160235   19099 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 16:57:12.169369   19099 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 16:57:12.169425   19099 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 16:57:12.178966   19099 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 16:57:12.230341   19099 kubeadm.go:310] W0927 16:57:12.213961     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 16:57:12.231152   19099 kubeadm.go:310] W0927 16:57:12.215001     815 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 16:57:12.334480   19099 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
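The two deprecation warnings above name the documented fix themselves; on this guest that would look roughly like the following, using the kubeadm binary and config path minikube uses in this run (the output filename is illustrative):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
	  --old-config /var/tmp/minikube/kubeadm.yaml \
	  --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml   # hypothetical output path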
	I0927 16:57:23.039537   19099 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 16:57:23.039614   19099 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 16:57:23.039740   19099 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 16:57:23.039869   19099 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 16:57:23.039992   19099 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 16:57:23.040087   19099 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 16:57:23.041653   19099 out.go:235]   - Generating certificates and keys ...
	I0927 16:57:23.041740   19099 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 16:57:23.041822   19099 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 16:57:23.041931   19099 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 16:57:23.042018   19099 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 16:57:23.042102   19099 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 16:57:23.042169   19099 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 16:57:23.042242   19099 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 16:57:23.042444   19099 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-511364 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0927 16:57:23.042509   19099 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 16:57:23.042624   19099 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-511364 localhost] and IPs [192.168.39.239 127.0.0.1 ::1]
	I0927 16:57:23.042708   19099 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 16:57:23.042779   19099 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 16:57:23.042833   19099 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 16:57:23.042889   19099 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 16:57:23.042937   19099 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 16:57:23.043001   19099 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 16:57:23.043071   19099 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 16:57:23.043162   19099 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 16:57:23.043258   19099 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 16:57:23.043348   19099 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 16:57:23.043432   19099 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 16:57:23.045090   19099 out.go:235]   - Booting up control plane ...
	I0927 16:57:23.045235   19099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 16:57:23.045346   19099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 16:57:23.045411   19099 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 16:57:23.045515   19099 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 16:57:23.045596   19099 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 16:57:23.045631   19099 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 16:57:23.045739   19099 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 16:57:23.045830   19099 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 16:57:23.045888   19099 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.200729ms
	I0927 16:57:23.045953   19099 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 16:57:23.046007   19099 kubeadm.go:310] [api-check] The API server is healthy after 6.002617738s
	I0927 16:57:23.046095   19099 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 16:57:23.046210   19099 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 16:57:23.046267   19099 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 16:57:23.046484   19099 kubeadm.go:310] [mark-control-plane] Marking the node addons-511364 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 16:57:23.046537   19099 kubeadm.go:310] [bootstrap-token] Using token: 0jzdut.25ae1ldk8wkfzrio
	I0927 16:57:23.047666   19099 out.go:235]   - Configuring RBAC rules ...
	I0927 16:57:23.047751   19099 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 16:57:23.047839   19099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 16:57:23.048053   19099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 16:57:23.048231   19099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 16:57:23.048406   19099 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 16:57:23.048536   19099 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 16:57:23.048696   19099 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 16:57:23.048750   19099 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 16:57:23.048792   19099 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 16:57:23.048798   19099 kubeadm.go:310] 
	I0927 16:57:23.048847   19099 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 16:57:23.048853   19099 kubeadm.go:310] 
	I0927 16:57:23.048922   19099 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 16:57:23.048928   19099 kubeadm.go:310] 
	I0927 16:57:23.048963   19099 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 16:57:23.049052   19099 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 16:57:23.049125   19099 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 16:57:23.049141   19099 kubeadm.go:310] 
	I0927 16:57:23.049201   19099 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 16:57:23.049209   19099 kubeadm.go:310] 
	I0927 16:57:23.049253   19099 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 16:57:23.049258   19099 kubeadm.go:310] 
	I0927 16:57:23.049320   19099 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 16:57:23.049419   19099 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 16:57:23.049506   19099 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 16:57:23.049517   19099 kubeadm.go:310] 
	I0927 16:57:23.049603   19099 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 16:57:23.049678   19099 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 16:57:23.049684   19099 kubeadm.go:310] 
	I0927 16:57:23.049757   19099 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0jzdut.25ae1ldk8wkfzrio \
	I0927 16:57:23.049849   19099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 \
	I0927 16:57:23.049868   19099 kubeadm.go:310] 	--control-plane 
	I0927 16:57:23.049874   19099 kubeadm.go:310] 
	I0927 16:57:23.049946   19099 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 16:57:23.049954   19099 kubeadm.go:310] 
	I0927 16:57:23.050022   19099 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0jzdut.25ae1ldk8wkfzrio \
	I0927 16:57:23.050123   19099 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 
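The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key; the standard pipeline from the kubeadm documentation reproduces it, assuming minikube's certificatesDir (/var/lib/minikube/certs) from the kubeadm config earlier in this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should match 57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93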
	I0927 16:57:23.050132   19099 cni.go:84] Creating CNI manager for ""
	I0927 16:57:23.050139   19099 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 16:57:23.051576   19099 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 16:57:23.052753   19099 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 16:57:23.064005   19099 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 16:57:23.083635   19099 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 16:57:23.083744   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:23.083766   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-511364 minikube.k8s.io/updated_at=2024_09_27T16_57_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=addons-511364 minikube.k8s.io/primary=true
	I0927 16:57:23.105509   19099 ops.go:34] apiserver oom_adj: -16
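The clusterrolebinding and node labels applied just above can be spot-checked the same way the log drives kubectl (binary, kubeconfig, and object names are the ones from the Run lines above):

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get clusterrolebinding minikube-rbac
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get node addons-511364 --show-labels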
	I0927 16:57:23.178163   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:23.678515   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:24.179053   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:24.679323   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:25.178780   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:25.678965   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:26.178655   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:26.678425   19099 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 16:57:26.748884   19099 kubeadm.go:1113] duration metric: took 3.66521333s to wait for elevateKubeSystemPrivileges
	I0927 16:57:26.748988   19099 kubeadm.go:394] duration metric: took 14.708986713s to StartCluster
	I0927 16:57:26.749018   19099 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:26.749156   19099 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 16:57:26.749530   19099 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 16:57:26.749727   19099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 16:57:26.749754   19099 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 16:57:26.749793   19099 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
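
Each addon flagged true in the toEnable map above is then configured in its own goroutine, which is why the log lines that follow interleave many per-addon "Setting addon" and "Launching plugin server for driver kvm2" messages. A hedged sketch of that fan-out pattern (enableAddons and the enable callback are illustrative stand-ins, not minikube's actual addon API):

    package main

    import (
        "errors"
        "fmt"
        "sync"
    )

    // enableAddons starts one goroutine per addon flagged true in toEnable and
    // waits for all of them; the interleaved per-addon lines that follow in the
    // log come from this kind of concurrent setup.
    func enableAddons(toEnable map[string]bool, enable func(name string) error) error {
        var (
            wg   sync.WaitGroup
            mu   sync.Mutex
            errs []error
        )
        for name, wanted := range toEnable {
            if !wanted {
                continue
            }
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                if err := enable(name); err != nil {
                    mu.Lock()
                    errs = append(errs, fmt.Errorf("%s: %w", name, err))
                    mu.Unlock()
                }
            }(name)
        }
        wg.Wait()
        return errors.Join(errs...)
    }

    func main() {
        toEnable := map[string]bool{"ingress": true, "metrics-server": true, "volcano": true}
        err := enableAddons(toEnable, func(name string) error {
            fmt.Println("enabling", name)
            return nil
        })
        fmt.Println("done:", err)
    }
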
	I0927 16:57:26.749886   19099 addons.go:69] Setting yakd=true in profile "addons-511364"
	I0927 16:57:26.749891   19099 addons.go:69] Setting ingress=true in profile "addons-511364"
	I0927 16:57:26.749917   19099 addons.go:234] Setting addon yakd=true in "addons-511364"
	I0927 16:57:26.749908   19099 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-511364"
	I0927 16:57:26.749931   19099 addons.go:234] Setting addon ingress=true in "addons-511364"
	I0927 16:57:26.749946   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.749963   19099 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-511364"
	I0927 16:57:26.749968   19099 addons.go:69] Setting metrics-server=true in profile "addons-511364"
	I0927 16:57:26.749982   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.749994   19099 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-511364"
	I0927 16:57:26.749996   19099 addons.go:69] Setting default-storageclass=true in profile "addons-511364"
	I0927 16:57:26.749996   19099 config.go:182] Loaded profile config "addons-511364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 16:57:26.750006   19099 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-511364"
	I0927 16:57:26.750023   19099 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-511364"
	I0927 16:57:26.750013   19099 addons.go:69] Setting cloud-spanner=true in profile "addons-511364"
	I0927 16:57:26.750040   19099 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-511364"
	I0927 16:57:26.750040   19099 addons.go:69] Setting storage-provisioner=true in profile "addons-511364"
	I0927 16:57:26.750054   19099 addons.go:69] Setting volcano=true in profile "addons-511364"
	I0927 16:57:26.750061   19099 addons.go:69] Setting gcp-auth=true in profile "addons-511364"
	I0927 16:57:26.750063   19099 addons.go:234] Setting addon cloud-spanner=true in "addons-511364"
	I0927 16:57:26.750068   19099 addons.go:234] Setting addon volcano=true in "addons-511364"
	I0927 16:57:26.750078   19099 mustload.go:65] Loading cluster: addons-511364
	I0927 16:57:26.750087   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.750094   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.750227   19099 config.go:182] Loaded profile config "addons-511364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 16:57:26.750417   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.750412   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.750437   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.749963   19099 addons.go:69] Setting inspektor-gadget=true in profile "addons-511364"
	I0927 16:57:26.750448   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.750447   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.750454   19099 addons.go:234] Setting addon inspektor-gadget=true in "addons-511364"
	I0927 16:57:26.750471   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.750473   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.750480   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.750511   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.750556   19099 addons.go:69] Setting volumesnapshots=true in profile "addons-511364"
	I0927 16:57:26.750578   19099 addons.go:234] Setting addon volumesnapshots=true in "addons-511364"
	I0927 16:57:26.750608   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.750619   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.750658   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.750842   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.750884   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.750055   19099 addons.go:234] Setting addon storage-provisioner=true in "addons-511364"
	I0927 16:57:26.750998   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.750009   19099 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-511364"
	I0927 16:57:26.750434   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.751261   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.750042   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.749989   19099 addons.go:234] Setting addon metrics-server=true in "addons-511364"
	I0927 16:57:26.749925   19099 addons.go:69] Setting registry=true in profile "addons-511364"
	I0927 16:57:26.750963   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.751297   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.751308   19099 addons.go:234] Setting addon registry=true in "addons-511364"
	I0927 16:57:26.749988   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.751451   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.751630   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.751685   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.751701   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.751726   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.751726   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.749939   19099 addons.go:69] Setting ingress-dns=true in profile "addons-511364"
	I0927 16:57:26.751810   19099 addons.go:234] Setting addon ingress-dns=true in "addons-511364"
	I0927 16:57:26.751845   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.751857   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.751929   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.752076   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.752106   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.752222   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.752246   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.751643   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.752319   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.758696   19099 out.go:177] * Verifying Kubernetes components...
	I0927 16:57:26.763678   19099 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 16:57:26.772121   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0927 16:57:26.772460   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0927 16:57:26.772834   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.772960   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.773216   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I0927 16:57:26.773554   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.773582   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.773884   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.773944   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.773964   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.774343   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.774417   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40095
	I0927 16:57:26.774434   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.774497   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.774573   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.774742   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.775406   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.775447   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.776841   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.776860   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.776928   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.777452   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.777502   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I0927 16:57:26.782990   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.783040   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.783110   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.783157   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.783319   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46695
	I0927 16:57:26.783638   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.783683   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.783683   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.783722   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.783734   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.783850   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0927 16:57:26.783994   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0927 16:57:26.784937   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.785048   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.785119   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.785134   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.785158   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.785809   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.785828   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.786035   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.786049   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.786176   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.786188   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.786244   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.786292   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.786751   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.786764   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.786830   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.786940   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.787177   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.787203   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.787351   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.787382   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.790482   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.790907   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.790950   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.810279   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45931
	I0927 16:57:26.810995   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.811526   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.811545   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.811972   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.812178   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.813940   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.816156   19099 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 16:57:26.817358   19099 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 16:57:26.817384   19099 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 16:57:26.817407   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.817629   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41761
	I0927 16:57:26.818116   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.818662   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.818679   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.819072   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.819247   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.820301   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0927 16:57:26.820957   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.821511   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.821526   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.821838   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.821891   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.822193   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.822265   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:26.822273   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:26.822524   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.822728   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.822763   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.822977   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:26.822990   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:26.822999   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:26.823001   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.823007   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:26.823013   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:26.823297   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.823308   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.823525   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.823645   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.823920   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37141
	I0927 16:57:26.824021   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I0927 16:57:26.824310   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:26.824318   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	W0927 16:57:26.824401   19099 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
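
The volcano failure is expected on this job rather than a regression: the addon's enable callback rejects the crio runtime before deploying anything, which is why the remaining addons continue. A hypothetical guard of that shape (names and structure are illustrative, not minikube's actual addon code):

    package main

    import "fmt"

    // enableVolcano refuses container runtimes the addon does not support,
    // matching the "volcano addon does not support crio" warning above.
    func enableVolcano(containerRuntime string) error {
        if containerRuntime == "crio" {
            return fmt.Errorf("volcano addon does not support %s", containerRuntime)
        }
        // ...apply the volcano manifests for supported runtimes...
        return nil
    }

    func main() {
        fmt.Println(enableVolcano("crio")) // volcano addon does not support crio
    }
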
	I0927 16:57:26.824569   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.824951   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.824963   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.825006   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0927 16:57:26.825547   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.825960   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.826014   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.826030   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.826033   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.826397   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.826440   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.826920   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.826935   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.827338   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42007
	I0927 16:57:26.827549   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.827817   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.827847   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.827848   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.828134   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.829614   19099 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-511364"
	I0927 16:57:26.829656   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.829780   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.830052   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.830084   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.831025   19099 addons.go:234] Setting addon default-storageclass=true in "addons-511364"
	I0927 16:57:26.831047   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:26.831273   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.831294   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.831529   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.831542   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.832062   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.832117   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37079
	I0927 16:57:26.832711   19099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0927 16:57:26.832989   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.833068   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.833106   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.833795   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.833815   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.833870   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43139
	I0927 16:57:26.834303   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.834367   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.834574   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.835088   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.835109   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.835487   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.835651   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.835703   19099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 16:57:26.836934   19099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 16:57:26.837311   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.838456   19099 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 16:57:26.838474   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0927 16:57:26.838573   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.839204   19099 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 16:57:26.839462   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40689
	I0927 16:57:26.839961   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.840500   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.840516   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.840672   19099 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 16:57:26.840693   19099 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 16:57:26.840716   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.840959   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.841571   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.841615   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.843197   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.843749   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.844087   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.844104   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.844318   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.844498   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.844654   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.844670   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.844687   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.844808   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.844810   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.844990   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.845175   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.845345   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.853150   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35437
	I0927 16:57:26.854087   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.854574   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.854592   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.855026   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.855217   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.855300   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0927 16:57:26.855437   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41389
	I0927 16:57:26.855929   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.856649   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.857172   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.857239   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.857264   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.857272   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.857294   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.857707   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.857711   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.857745   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45451
	I0927 16:57:26.858303   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.858336   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.858590   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.858862   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.859287   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.859311   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.859804   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 16:57:26.860254   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.860911   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.860943   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.861052   19099 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 16:57:26.861069   19099 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 16:57:26.861095   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.861739   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.863226   19099 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 16:57:26.864385   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.864650   19099 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 16:57:26.864666   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 16:57:26.864684   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.864791   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.864815   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.865057   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.865119   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I0927 16:57:26.865410   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.865606   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.865771   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.866559   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.867258   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.867283   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.867691   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.867755   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.868360   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.868400   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.868645   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.868664   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.868679   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.868854   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.868983   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.869104   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.872071   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I0927 16:57:26.872285   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0927 16:57:26.872689   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.872775   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.873248   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.873269   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.873596   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.873614   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.873671   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.874270   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:26.874296   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:26.874588   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.875136   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.877048   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.879280   19099 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 16:57:26.880575   19099 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 16:57:26.880595   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 16:57:26.880617   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.884360   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.885032   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.885054   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.885280   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.885522   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.885732   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.885979   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.889691   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I0927 16:57:26.890061   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0927 16:57:26.890204   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43287
	I0927 16:57:26.890676   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.891769   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.891789   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.891873   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.892002   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0927 16:57:26.892159   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.892209   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0927 16:57:26.892366   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.892411   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.892517   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.892816   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.892832   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.892960   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.892969   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.893504   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.893627   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.893651   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.893789   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.894005   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.894009   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.894164   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.894209   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.894558   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.894572   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.895534   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.895602   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.895754   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.895812   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.896522   19099 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 16:57:26.896685   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.896710   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.897602   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.897843   19099 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 16:57:26.897858   19099 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 16:57:26.897885   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.898044   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.899284   19099 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 16:57:26.899418   19099 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0927 16:57:26.899607   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 16:57:26.899971   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38289
	I0927 16:57:26.900813   19099 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 16:57:26.900829   19099 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 16:57:26.900681   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.900847   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.901364   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.901382   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.901784   19099 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0927 16:57:26.901862   19099 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 16:57:26.901878   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0927 16:57:26.901893   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.902128   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.902320   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.903141   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 16:57:26.903242   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.904462   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.904465   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.904497   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.904678   19099 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 16:57:26.904690   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 16:57:26.904707   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.904967   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.904967   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.905344   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.905953   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.905988   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.906321   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 16:57:26.906340   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.906693   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.906785   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.906844   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.907000   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.907061   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.907072   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.907168   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.907221   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.907390   19099 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 16:57:26.907419   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.907449   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.907480   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34619
	I0927 16:57:26.908263   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.908369   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.908635   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:26.909249   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 16:57:26.909256   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:26.909270   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:26.909397   19099 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 16:57:26.909406   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 16:57:26.909417   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.909485   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.909786   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.909802   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.909837   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:26.910003   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.910069   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:26.910231   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.910355   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.910593   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.911850   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:26.913074   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.913078   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 16:57:26.913437   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.913459   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.913660   19099 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 16:57:26.913717   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.914120   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.914310   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.914417   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.915658   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 16:57:26.916463   19099 out.go:177]   - Using image docker.io/busybox:stable
	I0927 16:57:26.917959   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 16:57:26.918033   19099 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 16:57:26.918046   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 16:57:26.918059   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.919867   19099 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 16:57:26.920920   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 16:57:26.920940   19099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 16:57:26.920961   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:26.921739   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.922171   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.922202   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.922360   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.922570   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.922871   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.922989   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:26.924496   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.925095   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:26.925142   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:26.925304   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:26.925491   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:26.925666   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:26.925792   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	W0927 16:57:26.929725   19099 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40036->192.168.39.239:22: read: connection reset by peer
	I0927 16:57:26.929753   19099 retry.go:31] will retry after 206.539107ms: ssh: handshake failed: read tcp 192.168.39.1:40036->192.168.39.239:22: read: connection reset by peer
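
The SSH handshake failure at 16:57:26.929 is not fatal: retry.go waits roughly 206ms and dials again, and the scp/apply lines that follow show the retry succeeded. A minimal sketch of that retry-with-delay idiom in Go (the attempt limit and fixed delay are assumptions; minikube's retry helper picks its own backoff):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // dialWithRetry retries a flaky connection a few times with a short pause,
    // mirroring the "will retry after 206.539107ms" behaviour in the log above.
    func dialWithRetry(attempts int, delay time.Duration, dial func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            time.Sleep(delay) // e.g. ~200ms between attempts
        }
        return fmt.Errorf("dial failed after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := dialWithRetry(3, 200*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                return errors.New("ssh: handshake failed") // first attempt fails
            }
            return nil // second attempt succeeds
        })
        fmt.Println(err) // <nil>
    }
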
	I0927 16:57:27.161762   19099 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 16:57:27.161791   19099 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 16:57:27.202581   19099 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 16:57:27.202607   19099 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 16:57:27.299562   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0927 16:57:27.300724   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 16:57:27.316613   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 16:57:27.337056   19099 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 16:57:27.337080   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 16:57:27.338089   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 16:57:27.355792   19099 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 16:57:27.355822   19099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 16:57:27.356728   19099 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 16:57:27.356743   19099 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 16:57:27.365953   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 16:57:27.379992   19099 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 16:57:27.380015   19099 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 16:57:27.389771   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0927 16:57:27.405277   19099 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 16:57:27.405302   19099 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 16:57:27.436025   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 16:57:27.486393   19099 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 16:57:27.486418   19099 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 16:57:27.581611   19099 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 16:57:27.584763   19099 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 16:57:27.629095   19099 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 16:57:27.629126   19099 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 16:57:27.631703   19099 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 16:57:27.631723   19099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 16:57:27.634771   19099 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 16:57:27.634788   19099 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 16:57:27.679912   19099 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 16:57:27.679939   19099 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 16:57:27.692910   19099 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 16:57:27.692940   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 16:57:27.702457   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 16:57:27.702486   19099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 16:57:27.793478   19099 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 16:57:27.793503   19099 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 16:57:27.842416   19099 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 16:57:27.842441   19099 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 16:57:27.872744   19099 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0927 16:57:27.872770   19099 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0927 16:57:27.872959   19099 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 16:57:27.872990   19099 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 16:57:27.882379   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 16:57:27.947647   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 16:57:27.947675   19099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 16:57:28.021744   19099 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 16:57:28.021850   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 16:57:28.062291   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 16:57:28.062321   19099 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 16:57:28.076022   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 16:57:28.111942   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 16:57:28.111973   19099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 16:57:28.122933   19099 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 16:57:28.122962   19099 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 16:57:28.304201   19099 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 16:57:28.304232   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 16:57:28.332941   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 16:57:28.350241   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 16:57:28.350271   19099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 16:57:28.488870   19099 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 16:57:28.488903   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0927 16:57:28.491560   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 16:57:28.563488   19099 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 16:57:28.563519   19099 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 16:57:28.762114   19099 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 16:57:28.762150   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 16:57:28.770548   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 16:57:29.095900   19099 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 16:57:29.095930   19099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 16:57:29.340886   19099 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 16:57:29.340918   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 16:57:29.727869   19099 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 16:57:29.727892   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 16:57:29.839392   19099 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 16:57:29.839426   19099 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 16:57:30.065707   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 16:57:33.868827   19099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 16:57:33.868865   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:33.872255   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:33.872692   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:33.872713   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:33.872942   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:33.873165   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:33.873334   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:33.873509   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:34.128253   19099 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 16:57:34.223050   19099 addons.go:234] Setting addon gcp-auth=true in "addons-511364"
	I0927 16:57:34.223114   19099 host.go:66] Checking if "addons-511364" exists ...
	I0927 16:57:34.223557   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:34.223613   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:34.240930   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0927 16:57:34.241486   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:34.242000   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:34.242028   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:34.242360   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:34.242939   19099 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 16:57:34.242983   19099 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 16:57:34.258763   19099 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I0927 16:57:34.259321   19099 main.go:141] libmachine: () Calling .GetVersion
	I0927 16:57:34.259957   19099 main.go:141] libmachine: Using API Version  1
	I0927 16:57:34.259999   19099 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 16:57:34.260327   19099 main.go:141] libmachine: () Calling .GetMachineName
	I0927 16:57:34.260544   19099 main.go:141] libmachine: (addons-511364) Calling .GetState
	I0927 16:57:34.262058   19099 main.go:141] libmachine: (addons-511364) Calling .DriverName
	I0927 16:57:34.262279   19099 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 16:57:34.262308   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHHostname
	I0927 16:57:34.265686   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:34.266244   19099 main.go:141] libmachine: (addons-511364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:e9:5c", ip: ""} in network mk-addons-511364: {Iface:virbr1 ExpiryTime:2024-09-27 17:56:55 +0000 UTC Type:0 Mac:52:54:00:5c:e9:5c Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:addons-511364 Clientid:01:52:54:00:5c:e9:5c}
	I0927 16:57:34.266275   19099 main.go:141] libmachine: (addons-511364) DBG | domain addons-511364 has defined IP address 192.168.39.239 and MAC address 52:54:00:5c:e9:5c in network mk-addons-511364
	I0927 16:57:34.266486   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHPort
	I0927 16:57:34.266714   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHKeyPath
	I0927 16:57:34.266891   19099 main.go:141] libmachine: (addons-511364) Calling .GetSSHUsername
	I0927 16:57:34.267111   19099 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/addons-511364/id_rsa Username:docker}
	I0927 16:57:35.096903   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.797301932s)
	I0927 16:57:35.096971   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.096985   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097016   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.796265592s)
	I0927 16:57:35.097054   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.097068   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097090   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.780448818s)
	I0927 16:57:35.097116   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.097129   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097178   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.759069002s)
	I0927 16:57:35.097230   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.097246   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.707447217s)
	I0927 16:57:35.097276   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.097286   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097251   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097202   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.731226728s)
	I0927 16:57:35.097351   19099 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.515701654s)
	I0927 16:57:35.097363   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.097371   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097322   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.661238967s)
	I0927 16:57:35.097411   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.097420   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.097478   19099 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.512687862s)
	I0927 16:57:35.097492   19099 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
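The /bin/bash pipeline completed above rewrites the CoreDNS Corefile in place, inserting a hosts block that maps host.minikube.internal to 192.168.39.1. One way to confirm the injection from outside the harness (an illustrative check; the test itself only relies on the replace succeeding):

    # print the patched Corefile and show the injected hosts block
    $ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
        | grep -A 3 'hosts {'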
	I0927 16:57:35.098398   19099 node_ready.go:35] waiting up to 6m0s for node "addons-511364" to be "Ready" ...
	I0927 16:57:35.098539   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.216125959s)
	I0927 16:57:35.098566   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.098578   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.098690   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.022632715s)
	I0927 16:57:35.098714   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.098725   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099247   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.766265102s)
	I0927 16:57:35.099280   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099293   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099366   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.607693087s)
	W0927 16:57:35.099398   19099 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 16:57:35.099420   19099 retry.go:31] will retry after 134.406059ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
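The failure here is an ordering race rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define it, and the API server rejects it until those CRDs are established, so the runner simply retries. A rough manual equivalent, assuming the same manifest paths inside the VM, would wait for the CRD before applying the class:

    # wait for the VolumeSnapshotClass CRD to be served, then apply the class on its own
    $ kubectl wait --for condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    $ kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml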
	I0927 16:57:35.099502   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099507   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.328926045s)
	I0927 16:57:35.099528   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099529   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099534   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099538   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099545   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099559   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099562   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099567   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099569   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099577   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099584   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099619   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099625   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099633   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099638   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099660   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099669   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099675   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099679   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099686   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099693   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099699   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099705   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099711   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099745   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099759   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099778   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099784   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099791   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099797   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099818   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099866   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099874   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099884   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099891   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.099941   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.099960   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.099970   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.099978   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.099984   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.100402   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.100434   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.100441   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.100449   19099 addons.go:475] Verifying addon registry=true in "addons-511364"
	I0927 16:57:35.100690   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.100699   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.100706   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.100711   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.100810   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.100821   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.100983   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.101006   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.101013   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.101333   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.101409   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.101418   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.101613   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.101637   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.101643   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.101826   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.101850   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.101857   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.101864   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.101870   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.102296   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.102313   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.102322   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.102325   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.102348   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.102359   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.102407   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.102417   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.102451   19099 out.go:177] * Verifying registry addon...
	I0927 16:57:35.102368   19099 addons.go:475] Verifying addon ingress=true in "addons-511364"
	I0927 16:57:35.104090   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.104103   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.104132   19099 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-511364 service yakd-dashboard -n yakd-dashboard
	
	I0927 16:57:35.104978   19099 out.go:177] * Verifying ingress addon...
	I0927 16:57:35.105998   19099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 16:57:35.106063   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.106102   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.106118   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.107443   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.107467   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.107481   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.107488   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.107696   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:35.108068   19099 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0927 16:57:35.108193   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.108207   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.108221   19099 addons.go:475] Verifying addon metrics-server=true in "addons-511364"
	I0927 16:57:35.120190   19099 node_ready.go:49] node "addons-511364" has status "Ready":"True"
	I0927 16:57:35.120214   19099 node_ready.go:38] duration metric: took 21.798841ms for node "addons-511364" to be "Ready" ...
	I0927 16:57:35.120223   19099 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
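At this point the harness switches from installing manifests to polling readiness: first the node, then every system-critical pod matching the labels listed above. Outside the test code, roughly the same gates can be approximated with kubectl wait (a sketch for illustration, not what the harness executes):

    # block until the node and the kube-dns pods report Ready, mirroring the 6m budget
    $ kubectl wait --for=condition=Ready node/addons-511364 --timeout=6m
    $ kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m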
	I0927 16:57:35.122253   19099 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0927 16:57:35.122272   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:35.122302   19099 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 16:57:35.122324   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:35.157089   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.157111   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.157374   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.157416   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.157423   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	W0927 16:57:35.157555   19099 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
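The "object has been modified" message is an optimistic-concurrency conflict: the default-storageclass callback and the storage-provisioner-rancher addon are updating storage classes at the same time, so one write loses and the addon reports the error shown. If the default flag ever ended up on the wrong class, it could be fixed by hand with the standard annotation (a manual workaround, not something the test performs):

    # make "standard" the default StorageClass and clear the flag on "local-path"
    $ kubectl patch storageclass local-path -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    $ kubectl patch storageclass standard -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'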
	I0927 16:57:35.170524   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:35.170557   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:35.170839   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:35.170887   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:35.231831   19099 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace to be "Ready" ...
	I0927 16:57:35.234318   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 16:57:35.606526   19099 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-511364" context rescaled to 1 replicas
	I0927 16:57:35.615418   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:35.615704   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:36.295720   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:36.295847   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:36.321006   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.255239119s)
	I0927 16:57:36.321050   19099 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.058749779s)
	I0927 16:57:36.321053   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:36.321213   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:36.321451   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:36.321499   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:36.321511   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:36.321525   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:36.321533   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:36.321751   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:36.321798   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:36.321813   19099 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-511364"
	I0927 16:57:36.321775   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:36.323724   19099 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 16:57:36.323730   19099 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 16:57:36.325142   19099 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 16:57:36.325890   19099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 16:57:36.326663   19099 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 16:57:36.326678   19099 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 16:57:36.338434   19099 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 16:57:36.338466   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:36.462074   19099 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 16:57:36.462101   19099 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 16:57:36.532140   19099 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 16:57:36.532164   19099 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 16:57:36.558230   19099 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 16:57:36.610031   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:36.612712   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:36.831079   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:37.018374   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.784012431s)
	I0927 16:57:37.018429   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:37.018440   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:37.018765   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:37.018836   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:37.018854   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:37.018869   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:37.018881   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:37.019083   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:37.019099   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:37.019118   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:37.113385   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:37.113458   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:37.237560   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:37.330048   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:37.613301   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:37.726638   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:37.858582   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:37.909944   19099 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.351678746s)
	I0927 16:57:37.910001   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:37.910019   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:37.910319   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:37.910335   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:37.910343   19099 main.go:141] libmachine: Making call to close driver server
	I0927 16:57:37.910365   19099 main.go:141] libmachine: (addons-511364) Calling .Close
	I0927 16:57:37.910606   19099 main.go:141] libmachine: Successfully made call to close driver server
	I0927 16:57:37.910623   19099 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 16:57:37.910634   19099 main.go:141] libmachine: (addons-511364) DBG | Closing plugin on server side
	I0927 16:57:37.912632   19099 addons.go:475] Verifying addon gcp-auth=true in "addons-511364"
	I0927 16:57:37.914439   19099 out.go:177] * Verifying gcp-auth addon...
	I0927 16:57:37.916390   19099 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 16:57:38.001859   19099 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 16:57:38.001880   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
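The gcp-auth webhook pod has been found but is still Pending, so the same kapi polling loop used for the other addons takes over. Watching it interactively would look something like this (illustrative only):

    # follow the gcp-auth webhook pod until it reaches Running
    $ kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth -w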
	I0927 16:57:38.166872   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:38.167500   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:38.330736   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:38.421794   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:38.613610   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:38.614015   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:38.831325   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:38.930959   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:39.110255   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:39.112673   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:39.237661   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:39.331246   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:39.419332   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:39.610322   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:39.612041   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:39.831235   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:39.922856   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:40.109372   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:40.111708   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:40.331724   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:40.431145   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:40.610118   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:40.612316   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:40.830613   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:40.919767   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:41.109818   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:41.111803   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:41.237907   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:41.330612   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:41.420917   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:41.612596   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:41.618912   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:41.830352   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:41.920692   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:42.110024   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:42.113994   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:42.330589   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:42.420567   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:42.609939   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:42.614062   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:42.738118   19099 pod_ready.go:98] pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:42 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.239 HostIPs:[{IP:192.168.39.239}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-27 16:57:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 16:57:31 +0000 UTC,FinishedAt:2024-09-27 16:57:41 +0000 UTC,ContainerID:cri-o://226815e5d4a6ef040cbd2a8e47df752276a3e90179c501fc94245d93605b6cc0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://226815e5d4a6ef040cbd2a8e47df752276a3e90179c501fc94245d93605b6cc0 Started:0xc002216f00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0023d6400} {Name:kube-api-access-jrgz8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0023d6410}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 16:57:42.738152   19099 pod_ready.go:82] duration metric: took 7.506292469s for pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace to be "Ready" ...
	E0927 16:57:42.738168   19099 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-8kvgj" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:42 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-27 16:57:27 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.239 HostIPs:[{IP:192.168.39.239}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-27 16:57:27 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-27 16:57:31 +0000 UTC,FinishedAt:2024-09-27 16:57:41 +0000 UTC,ContainerID:cri-o://226815e5d4a6ef040cbd2a8e47df752276a3e90179c501fc94245d93605b6cc0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://226815e5d4a6ef040cbd2a8e47df752276a3e90179c501fc94245d93605b6cc0 Started:0xc002216f00 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0023d6400} {Name:kube-api-access-jrgz8 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0023d6410}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0927 16:57:42.738180   19099 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace to be "Ready" ...
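The skipped pod is not itself an error: its phase is Succeeded with reason PodCompleted, which is consistent with the coredns deployment having been rescaled from two replicas to one a few seconds earlier (16:57:35), so pod_ready moves on to the surviving replica coredns-7c65d6cfc9-b4zg9. A hypothetical spot check of what is left:

    # list the remaining CoreDNS replicas and the nodes they run on
    $ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide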
	I0927 16:57:42.831228   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:42.919829   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:43.113232   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:43.113481   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:43.329984   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:43.420972   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:43.611935   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:43.612304   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:43.832344   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:43.919395   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:44.110055   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:44.112114   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:44.330635   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:44.420789   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:44.610109   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:44.612157   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:44.744578   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:44.831167   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:44.921173   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:45.109457   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:45.111873   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:45.330547   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:45.420702   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:45.611412   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:45.613543   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:45.832783   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:45.919589   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:46.109858   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:46.112915   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:46.331330   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:46.420640   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:46.610117   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:46.612564   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:46.745272   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:46.832707   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:46.921335   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:47.109569   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:47.112007   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:47.331052   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:47.419511   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:47.609627   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:47.611786   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:47.830017   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:47.920217   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:48.109412   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:48.111289   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:48.330700   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:48.420214   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:48.611057   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:48.612259   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:48.745856   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:48.831626   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:48.919818   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:49.110445   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:49.113149   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:49.330947   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:49.420697   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:49.610823   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:49.614325   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:49.831623   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:49.931155   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:50.110346   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:50.112068   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:50.330767   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:50.430670   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:50.610209   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:50.612790   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:50.833961   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:50.920182   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:51.110673   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:51.112165   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:51.244767   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:51.330778   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:51.426470   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:51.609605   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:51.612205   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:51.835039   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:51.919628   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:52.110353   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:52.112127   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:52.330373   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:52.419906   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:52.609977   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:52.612338   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:52.830740   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:52.920476   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:53.110013   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:53.112069   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:53.330824   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:53.420327   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:53.609824   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:53.612247   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:53.743548   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:53.831223   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:53.919864   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:54.110006   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:54.112421   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:54.330218   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:54.420994   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:54.670769   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:54.671197   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:54.830666   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:54.919507   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:55.110906   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:55.115800   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:55.330470   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:55.420089   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:55.610828   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:55.612069   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:55.743620   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:55.830591   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:55.919522   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:56.110881   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:56.112244   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:56.330834   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:56.419681   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:56.609982   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:56.613117   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:56.831991   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:56.922248   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:57.109646   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:57.112207   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:57.330988   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:57.421018   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:57.610511   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:57.612342   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:57.745892   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:57:57.831616   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:57.920108   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:58.110977   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:58.112570   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:58.331976   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:58.419610   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:58.610154   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:58.612828   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:58.831375   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:58.920541   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:59.109908   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:59.112145   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:59.336310   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:59.420168   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:57:59.609658   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:57:59.612601   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:57:59.830710   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:57:59.920136   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:00.109208   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:00.111887   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:00.244934   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:00.330724   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:00.423654   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:00.610798   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:00.612507   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:00.830787   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:00.921239   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:01.115378   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:01.115828   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:01.333382   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:01.419903   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:01.611367   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:01.611961   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:01.832486   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:01.919951   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:02.110486   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:02.112988   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:02.331648   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:02.430779   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:02.610496   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:02.613580   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:02.745985   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:02.832836   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:02.920881   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:03.111453   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:03.113819   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:03.330944   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:03.421730   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:03.610347   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:03.612839   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:03.830117   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:03.919932   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:04.110885   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:04.112315   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:04.331165   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:04.420116   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:04.610615   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:04.612856   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:04.746508   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:04.833069   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:04.920382   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:05.110235   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:05.112564   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:05.331572   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:05.420770   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:05.609505   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:05.611774   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:05.831362   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:05.920394   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:06.110126   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:06.113321   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:06.330687   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:06.420145   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:06.610668   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:06.612300   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:06.747556   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:06.831761   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:06.931490   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:07.110060   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:07.112692   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:07.330925   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:07.420257   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:07.611784   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:07.612386   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:07.830828   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:07.919457   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:08.110968   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:08.111921   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:08.330333   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:08.420424   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:08.609699   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:08.611837   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:08.831002   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:08.920445   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:09.109645   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:09.111883   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:09.243793   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:09.330685   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:09.420080   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:09.610724   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:09.612348   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:09.831523   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:09.920359   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:10.111213   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:10.111592   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:10.329903   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:10.420324   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:10.609564   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:10.611834   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:10.830857   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:10.919803   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:11.110812   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:11.112594   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:11.244806   19099 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:11.331852   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:11.420822   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:11.610563   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:11.613554   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:11.831048   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:11.920598   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:12.113745   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:12.213990   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:12.246331   19099 pod_ready.go:93] pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace has status "Ready":"True"
	I0927 16:58:12.246365   19099 pod_ready.go:82] duration metric: took 29.508169455s for pod "coredns-7c65d6cfc9-b4zg9" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.246376   19099 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.253774   19099 pod_ready.go:93] pod "etcd-addons-511364" in "kube-system" namespace has status "Ready":"True"
	I0927 16:58:12.253799   19099 pod_ready.go:82] duration metric: took 7.416013ms for pod "etcd-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.253809   19099 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.262601   19099 pod_ready.go:93] pod "kube-apiserver-addons-511364" in "kube-system" namespace has status "Ready":"True"
	I0927 16:58:12.262627   19099 pod_ready.go:82] duration metric: took 8.811286ms for pod "kube-apiserver-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.262637   19099 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.269493   19099 pod_ready.go:93] pod "kube-controller-manager-addons-511364" in "kube-system" namespace has status "Ready":"True"
	I0927 16:58:12.269522   19099 pod_ready.go:82] duration metric: took 6.866338ms for pod "kube-controller-manager-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.269538   19099 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkzgg" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.277112   19099 pod_ready.go:93] pod "kube-proxy-xkzgg" in "kube-system" namespace has status "Ready":"True"
	I0927 16:58:12.277142   19099 pod_ready.go:82] duration metric: took 7.596315ms for pod "kube-proxy-xkzgg" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.277152   19099 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.332367   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:12.422225   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:12.610884   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:12.613359   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:12.641914   19099 pod_ready.go:93] pod "kube-scheduler-addons-511364" in "kube-system" namespace has status "Ready":"True"
	I0927 16:58:12.641944   19099 pod_ready.go:82] duration metric: took 364.785614ms for pod "kube-scheduler-addons-511364" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.641958   19099 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace to be "Ready" ...
	I0927 16:58:12.831696   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:12.920747   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:13.110808   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:13.112118   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:13.332574   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:13.419618   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:13.609932   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:13.612386   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:13.830747   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:13.920153   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:14.110756   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:14.112168   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:14.331708   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:14.419597   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:14.610393   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:14.612497   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:14.649061   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:14.830310   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:14.919714   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:15.110317   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:15.112442   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:15.330829   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:15.420632   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:15.609807   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:15.612254   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:15.830754   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:15.920126   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:16.112573   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:16.124490   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:16.331811   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:16.421836   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:16.610757   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:16.614168   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:16.836328   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:16.936085   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:17.110692   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:17.112264   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:17.148316   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:17.330242   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:17.419714   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:17.610331   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:17.613012   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:17.830545   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:17.921406   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:18.109412   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:18.112231   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:18.331190   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:18.420899   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:18.610131   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:18.612934   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:18.830856   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:18.920279   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:19.109717   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:19.111540   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:19.148678   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:19.330788   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:19.419986   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:19.610379   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:19.612928   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:19.830966   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:19.920650   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:20.109959   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:20.112493   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:20.330201   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:20.419939   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:20.610264   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:20.612507   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:20.961589   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:20.962462   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:21.110587   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:21.112220   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:21.330116   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:21.422010   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:21.610694   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:21.612438   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:21.648643   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:21.829708   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:21.919462   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:22.110524   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:22.112572   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:22.330511   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:22.420348   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:22.610520   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:22.612482   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:22.832796   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:22.920240   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:23.115808   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 16:58:23.116618   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:23.331214   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:23.422466   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:23.609386   19099 kapi.go:107] duration metric: took 48.503384732s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 16:58:23.612005   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:23.655297   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:23.830213   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:23.920002   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:24.113145   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:24.331334   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:24.431395   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:24.613033   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:24.830177   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:24.920594   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:25.112717   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:25.334535   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:25.420722   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:25.613326   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:25.831074   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:25.920206   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:26.113115   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:26.148687   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:26.330852   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:26.420245   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:26.612098   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:26.831166   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:26.920579   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:27.112651   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:27.330246   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:27.419392   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:27.612290   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:27.830626   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:27.930205   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:28.112754   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:28.331823   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:28.420665   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:28.612138   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:28.647543   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:28.831186   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:28.922176   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:29.114443   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:29.340628   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:29.420569   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:29.612549   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:29.831321   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:29.920796   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:30.113285   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:30.330993   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:30.431009   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:30.613443   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:30.649207   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:30.830771   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:30.920091   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:31.114745   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:31.338202   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:31.420697   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:31.612345   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:31.830841   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:31.919440   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:32.112385   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:32.332505   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:32.421422   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:32.612114   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:32.832405   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:32.921027   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:33.113528   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:33.150474   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:33.331305   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:33.420535   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:33.612692   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:33.830949   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:33.920665   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:34.121591   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:34.338068   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:34.420105   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:34.613802   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:34.830785   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:34.920039   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:35.113036   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:35.331283   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:35.421618   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:35.612784   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:35.649204   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:35.830045   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:35.921248   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:36.113941   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:36.331135   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:36.433173   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:36.613646   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:36.831299   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:36.920616   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:37.113255   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:37.330987   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:37.420073   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:37.613003   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:37.831169   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:37.931244   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:38.113562   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:38.149465   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:38.330704   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:38.419979   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:38.612856   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:38.830782   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:38.919926   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:39.114083   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:39.330775   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:39.419666   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:39.613229   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:39.837224   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:39.927199   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:40.113184   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:40.332117   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:40.421107   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:40.613487   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:40.651484   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:40.830761   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:40.922331   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:41.113686   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:41.339046   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:41.420430   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:41.612349   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:41.830197   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:41.920754   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:42.112679   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:42.330370   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:42.419336   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:42.611880   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:42.830800   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:42.920499   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:43.360663   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:43.361788   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:43.363029   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:43.561753   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:43.612721   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:43.833503   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:43.920534   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:44.113432   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:44.329765   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:44.420050   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:44.612682   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:44.830352   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:44.920770   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:45.113774   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:45.331023   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:45.420304   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:45.614079   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:45.649914   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:45.830985   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:45.920982   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:46.112883   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:46.330092   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:46.420186   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:46.612750   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:46.830557   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:46.920481   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:47.112071   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:47.330161   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:47.421115   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:47.613176   19099 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0927 16:58:47.653923   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:47.831661   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:47.920521   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:48.116350   19099 kapi.go:107] duration metric: took 1m13.008276022s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0927 16:58:48.331128   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:48.419977   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:48.830203   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:48.919985   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:49.657244   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:49.658467   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:49.662220   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:49.830764   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:49.922500   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:50.331339   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:50.420329   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:50.831261   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:50.919636   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:51.331225   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:51.431062   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:51.831626   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:51.921069   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:52.148869   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:52.331806   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:52.419836   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:52.830571   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:52.926723   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:53.332547   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:53.421226   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:53.831014   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:53.919929   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:54.150921   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:54.330973   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:54.419731   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:54.832917   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:54.932875   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 16:58:55.334632   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:55.420542   19099 kapi.go:107] duration metric: took 1m17.504148775s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 16:58:55.422529   19099 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-511364 cluster.
	I0927 16:58:55.423998   19099 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 16:58:55.425578   19099 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 16:58:55.831502   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:56.331487   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:56.648855   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:58:56.831251   19099 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 16:58:57.331181   19099 kapi.go:107] duration metric: took 1m21.005284207s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 16:58:57.333170   19099 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, nvidia-device-plugin, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0927 16:58:57.334418   19099 addons.go:510] duration metric: took 1m30.584618927s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner inspektor-gadget nvidia-device-plugin yakd metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0927 16:58:58.655014   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:01.147897   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:03.148226   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:05.149539   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:07.648380   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:09.649414   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:12.149284   19099 pod_ready.go:103] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"False"
	I0927 16:59:13.148984   19099 pod_ready.go:93] pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace has status "Ready":"True"
	I0927 16:59:13.149010   19099 pod_ready.go:82] duration metric: took 1m0.507044512s for pod "metrics-server-84c5f94fbc-967wf" in "kube-system" namespace to be "Ready" ...
	I0927 16:59:13.149020   19099 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mxrwc" in "kube-system" namespace to be "Ready" ...
	I0927 16:59:13.159818   19099 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-mxrwc" in "kube-system" namespace has status "Ready":"True"
	I0927 16:59:13.159841   19099 pod_ready.go:82] duration metric: took 10.815441ms for pod "nvidia-device-plugin-daemonset-mxrwc" in "kube-system" namespace to be "Ready" ...
	I0927 16:59:13.159857   19099 pod_ready.go:39] duration metric: took 1m38.039625904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 16:59:13.159872   19099 api_server.go:52] waiting for apiserver process to appear ...
	I0927 16:59:13.159899   19099 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 16:59:13.159946   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 16:59:13.212673   19099 cri.go:89] found id: "2eb9ae477c9242dcdee7a06ce1266dde999d972b78fc61c7a838925d224d7ac0"
	I0927 16:59:13.212695   19099 cri.go:89] found id: ""
	I0927 16:59:13.212702   19099 logs.go:276] 1 containers: [2eb9ae477c9242dcdee7a06ce1266dde999d972b78fc61c7a838925d224d7ac0]
	I0927 16:59:13.212747   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:59:13.217539   19099 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 16:59:13.217597   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 16:59:13.255957   19099 cri.go:89] found id: "e7af58e2f9698d5adaccbaa943efc8d8d1a4dd4bb57c4e05e5761474968a89a3"
	I0927 16:59:13.255980   19099 cri.go:89] found id: ""
	I0927 16:59:13.255989   19099 logs.go:276] 1 containers: [e7af58e2f9698d5adaccbaa943efc8d8d1a4dd4bb57c4e05e5761474968a89a3]
	I0927 16:59:13.256048   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:59:13.260399   19099 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 16:59:13.260456   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 16:59:13.297082   19099 cri.go:89] found id: "b6ca6a217bfaf394037b7f838b0088850f046d2bb4c68e403ad0d4295ab1fda6"
	I0927 16:59:13.297102   19099 cri.go:89] found id: ""
	I0927 16:59:13.297109   19099 logs.go:276] 1 containers: [b6ca6a217bfaf394037b7f838b0088850f046d2bb4c68e403ad0d4295ab1fda6]
	I0927 16:59:13.297151   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:59:13.301151   19099 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 16:59:13.301216   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 16:59:13.346314   19099 cri.go:89] found id: "bcc9c32c40714c12e271b9cd0e3243dc1ce0b5f987d9cd516fb4529226c7c9e8"
	I0927 16:59:13.346336   19099 cri.go:89] found id: ""
	I0927 16:59:13.346344   19099 logs.go:276] 1 containers: [bcc9c32c40714c12e271b9cd0e3243dc1ce0b5f987d9cd516fb4529226c7c9e8]
	I0927 16:59:13.346402   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:59:13.350680   19099 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 16:59:13.350747   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 16:59:13.395965   19099 cri.go:89] found id: "262becbe55d77639071720151ec96c48ed07925b43f2604e5f4938e72d066b0f"
	I0927 16:59:13.395989   19099 cri.go:89] found id: ""
	I0927 16:59:13.395997   19099 logs.go:276] 1 containers: [262becbe55d77639071720151ec96c48ed07925b43f2604e5f4938e72d066b0f]
	I0927 16:59:13.396054   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:59:13.399921   19099 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 16:59:13.399986   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 16:59:13.438820   19099 cri.go:89] found id: "1c59cd2da5b8f8dfa17191ce6c3da4c0834354ad74d4ddae3ae5ef673c32ac67"
	I0927 16:59:13.438842   19099 cri.go:89] found id: ""
	I0927 16:59:13.438851   19099 logs.go:276] 1 containers: [1c59cd2da5b8f8dfa17191ce6c3da4c0834354ad74d4ddae3ae5ef673c32ac67]
	I0927 16:59:13.438910   19099 ssh_runner.go:195] Run: which crictl
	I0927 16:59:13.443297   19099 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 16:59:13.443365   19099 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 16:59:13.483873   19099 cri.go:89] found id: ""
	I0927 16:59:13.483909   19099 logs.go:276] 0 containers: []
	W0927 16:59:13.483928   19099 logs.go:278] No container was found matching "kindnet"
	I0927 16:59:13.483937   19099 logs.go:123] Gathering logs for kubelet ...
	I0927 16:59:13.483947   19099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 16:59:13.569194   19099 logs.go:123] Gathering logs for coredns [b6ca6a217bfaf394037b7f838b0088850f046d2bb4c68e403ad0d4295ab1fda6] ...
	I0927 16:59:13.569238   19099 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6ca6a217bfaf394037b7f838b0088850f046d2bb4c68e403ad0d4295ab1fda6"
	I0927 16:59:13.606507   19099 logs.go:123] Gathering logs for CRI-O ...
	I0927 16:59:13.606540   19099 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:109: out/minikube-linux-amd64 start -p addons-511364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns failed: signal: killed
--- FAIL: TestAddons/Setup (2400.05s)
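The long runs of kapi.go:96 "waiting for pod" and pod_ready.go:103 lines above are label-selector polling loops: the test repeatedly re-lists the pods matching each addon's label until they report Ready, and the kapi.go:107 duration metrics record how long each wait took. The Go sketch below illustrates that pattern with client-go. It is only a sketch under stated assumptions: the helper name, namespace, and 5-minute timeout are made up for illustration and this is not minikube's actual kapi code; the label selector is taken from the log above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPodsReady polls until every pod matching selector in ns
// reports the Ready condition, in the spirit of the "waiting for pod ...
// Pending" lines above. Hypothetical helper, not minikube's implementation.
func waitForLabeledPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat apiserver blips as transient and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for i := range pods.Items {
				if !isPodReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
}

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Namespace and timeout are illustrative; the selector matches the log above.
	if err := waitForLabeledPodsReady(context.Background(), cs, "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("addon pods ready")
}

In this sketch, returning (false, nil) on a failed List keeps the poll running through transient API errors, so only the overall timeout ends the wait.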

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 node stop m02 -v=7 --alsologtostderr
E0927 17:45:57.975006   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:46:38.936380   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-748477 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.492681533s)

                                                
                                                
-- stdout --
	* Stopping node "ha-748477-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 17:45:50.817262   37153 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:45:50.817411   37153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:45:50.817421   37153 out.go:358] Setting ErrFile to fd 2...
	I0927 17:45:50.817426   37153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:45:50.817603   37153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:45:50.817883   37153 mustload.go:65] Loading cluster: ha-748477
	I0927 17:45:50.818281   37153 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:45:50.818301   37153 stop.go:39] StopHost: ha-748477-m02
	I0927 17:45:50.818911   37153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:45:50.818962   37153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:45:50.835187   37153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0927 17:45:50.835771   37153 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:45:50.836477   37153 main.go:141] libmachine: Using API Version  1
	I0927 17:45:50.836517   37153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:45:50.837139   37153 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:45:50.839616   37153 out.go:177] * Stopping node "ha-748477-m02"  ...
	I0927 17:45:50.840808   37153 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 17:45:50.840842   37153 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:45:50.841094   37153 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 17:45:50.841130   37153 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:45:50.843993   37153 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:45:50.844503   37153 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:45:50.844549   37153 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:45:50.844632   37153 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:45:50.844815   37153 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:45:50.844954   37153 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:45:50.845086   37153 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:45:50.940351   37153 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 17:45:50.999921   37153 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 17:45:51.056217   37153 main.go:141] libmachine: Stopping "ha-748477-m02"...
	I0927 17:45:51.056283   37153 main.go:141] libmachine: (ha-748477-m02) Calling .GetState
	I0927 17:45:51.057909   37153 main.go:141] libmachine: (ha-748477-m02) Calling .Stop
	I0927 17:45:51.062344   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 0/120
	I0927 17:45:52.063774   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 1/120
	I0927 17:45:53.065232   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 2/120
	I0927 17:45:54.066625   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 3/120
	I0927 17:45:55.068049   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 4/120
	I0927 17:45:56.069507   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 5/120
	I0927 17:45:57.070882   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 6/120
	I0927 17:45:58.073048   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 7/120
	I0927 17:45:59.074480   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 8/120
	I0927 17:46:00.075714   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 9/120
	I0927 17:46:01.077297   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 10/120
	I0927 17:46:02.078887   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 11/120
	I0927 17:46:03.081308   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 12/120
	I0927 17:46:04.082743   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 13/120
	I0927 17:46:05.084054   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 14/120
	I0927 17:46:06.086031   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 15/120
	I0927 17:46:07.087442   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 16/120
	I0927 17:46:08.089223   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 17/120
	I0927 17:46:09.090542   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 18/120
	I0927 17:46:10.091925   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 19/120
	I0927 17:46:11.094158   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 20/120
	I0927 17:46:12.095562   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 21/120
	I0927 17:46:13.096998   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 22/120
	I0927 17:46:14.099049   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 23/120
	I0927 17:46:15.100339   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 24/120
	I0927 17:46:16.102356   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 25/120
	I0927 17:46:17.104123   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 26/120
	I0927 17:46:18.105902   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 27/120
	I0927 17:46:19.107982   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 28/120
	I0927 17:46:20.110195   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 29/120
	I0927 17:46:21.111822   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 30/120
	I0927 17:46:22.113426   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 31/120
	I0927 17:46:23.115312   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 32/120
	I0927 17:46:24.117175   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 33/120
	I0927 17:46:25.118720   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 34/120
	I0927 17:46:26.120552   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 35/120
	I0927 17:46:27.121806   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 36/120
	I0927 17:46:28.123073   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 37/120
	I0927 17:46:29.124690   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 38/120
	I0927 17:46:30.126457   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 39/120
	I0927 17:46:31.128853   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 40/120
	I0927 17:46:32.130285   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 41/120
	I0927 17:46:33.131551   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 42/120
	I0927 17:46:34.132934   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 43/120
	I0927 17:46:35.134282   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 44/120
	I0927 17:46:36.136222   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 45/120
	I0927 17:46:37.137710   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 46/120
	I0927 17:46:38.139337   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 47/120
	I0927 17:46:39.140954   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 48/120
	I0927 17:46:40.142502   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 49/120
	I0927 17:46:41.144277   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 50/120
	I0927 17:46:42.145583   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 51/120
	I0927 17:46:43.147183   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 52/120
	I0927 17:46:44.149714   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 53/120
	I0927 17:46:45.151141   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 54/120
	I0927 17:46:46.152946   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 55/120
	I0927 17:46:47.154494   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 56/120
	I0927 17:46:48.155801   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 57/120
	I0927 17:46:49.158700   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 58/120
	I0927 17:46:50.160480   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 59/120
	I0927 17:46:51.162242   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 60/120
	I0927 17:46:52.163689   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 61/120
	I0927 17:46:53.165167   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 62/120
	I0927 17:46:54.166521   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 63/120
	I0927 17:46:55.167765   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 64/120
	I0927 17:46:56.169577   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 65/120
	I0927 17:46:57.171155   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 66/120
	I0927 17:46:58.173308   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 67/120
	I0927 17:46:59.174733   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 68/120
	I0927 17:47:00.175990   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 69/120
	I0927 17:47:01.178409   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 70/120
	I0927 17:47:02.179819   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 71/120
	I0927 17:47:03.181873   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 72/120
	I0927 17:47:04.183245   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 73/120
	I0927 17:47:05.184722   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 74/120
	I0927 17:47:06.186816   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 75/120
	I0927 17:47:07.189462   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 76/120
	I0927 17:47:08.191008   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 77/120
	I0927 17:47:09.193487   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 78/120
	I0927 17:47:10.195101   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 79/120
	I0927 17:47:11.197069   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 80/120
	I0927 17:47:12.198570   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 81/120
	I0927 17:47:13.200321   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 82/120
	I0927 17:47:14.201743   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 83/120
	I0927 17:47:15.203293   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 84/120
	I0927 17:47:16.205168   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 85/120
	I0927 17:47:17.206468   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 86/120
	I0927 17:47:18.208460   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 87/120
	I0927 17:47:19.209892   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 88/120
	I0927 17:47:20.211769   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 89/120
	I0927 17:47:21.214156   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 90/120
	I0927 17:47:22.215778   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 91/120
	I0927 17:47:23.217283   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 92/120
	I0927 17:47:24.218428   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 93/120
	I0927 17:47:25.219804   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 94/120
	I0927 17:47:26.221935   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 95/120
	I0927 17:47:27.223488   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 96/120
	I0927 17:47:28.225434   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 97/120
	I0927 17:47:29.227016   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 98/120
	I0927 17:47:30.228462   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 99/120
	I0927 17:47:31.230687   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 100/120
	I0927 17:47:32.232114   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 101/120
	I0927 17:47:33.234005   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 102/120
	I0927 17:47:34.235669   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 103/120
	I0927 17:47:35.237404   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 104/120
	I0927 17:47:36.239027   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 105/120
	I0927 17:47:37.241540   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 106/120
	I0927 17:47:38.243343   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 107/120
	I0927 17:47:39.244743   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 108/120
	I0927 17:47:40.246311   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 109/120
	I0927 17:47:41.248513   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 110/120
	I0927 17:47:42.250072   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 111/120
	I0927 17:47:43.252138   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 112/120
	I0927 17:47:44.253616   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 113/120
	I0927 17:47:45.255062   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 114/120
	I0927 17:47:46.257355   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 115/120
	I0927 17:47:47.259058   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 116/120
	I0927 17:47:48.261243   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 117/120
	I0927 17:47:49.262990   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 118/120
	I0927 17:47:50.264578   37153 main.go:141] libmachine: (ha-748477-m02) Waiting for machine to stop 119/120
	I0927 17:47:51.265666   37153 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 17:47:51.265795   37153 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
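The 120 "Waiting for machine to stop N/120" lines above show a fixed retry budget: after the Stop call is issued, the VM state is polled roughly once per second, and after 120 attempts the command gives up with the "unable to stop vm, current state \"Running\"" error that the node stop command then reports as exit status 30. A rough Go sketch of such a bounded stop-and-wait loop follows; vmState and stopVM are hypothetical stand-ins for the KVM driver calls, not libmachine's real API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// State is a stand-in for the driver's VM state enum.
type State int

const (
	Running State = iota
	Stopped
)

// vmState and stopVM are hypothetical hooks into a VM driver; in the log above
// the equivalent calls go through the libmachine kvm2 plugin.
var (
	vmState = func() (State, error) { return Running, nil }
	stopVM  = func() error { return nil }
)

// stopWithBudget asks the VM to stop, then polls its state once per second for
// at most maxAttempts iterations, matching the 0/120..119/120 countdown above.
func stopWithBudget(maxAttempts int) error {
	if err := stopVM(); err != nil {
		return err
	}
	for attempt := 0; attempt < maxAttempts; attempt++ {
		st, err := vmState()
		if err != nil {
			return err
		}
		if st == Stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", attempt, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := stopWithBudget(120); err != nil {
		fmt.Println("stop err:", err)
	}
}

With the stub vmState always reporting Running, running main exercises the full 120-attempt countdown and then prints the stop error, mirroring the tail of the log above.
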
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-748477 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
E0927 17:48:00.858851   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr: (18.888937873s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-748477 -n ha-748477
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 logs -n 25: (1.388348962s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m03_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m04 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp testdata/cp-test.txt                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m04_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03:/home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m03 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-748477 node stop m02 -v=7                                                     | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:41:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:41:11.282351   33104 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:41:11.282459   33104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:41:11.282464   33104 out.go:358] Setting ErrFile to fd 2...
	I0927 17:41:11.282469   33104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:41:11.282697   33104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:41:11.283272   33104 out.go:352] Setting JSON to false
	I0927 17:41:11.284134   33104 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5016,"bootTime":1727453855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:41:11.284236   33104 start.go:139] virtualization: kvm guest
	I0927 17:41:11.286413   33104 out.go:177] * [ha-748477] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:41:11.288037   33104 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:41:11.288045   33104 notify.go:220] Checking for updates...
	I0927 17:41:11.289671   33104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:41:11.291343   33104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:11.293056   33104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.294702   33104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:41:11.296107   33104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:41:11.297727   33104 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:41:11.334964   33104 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 17:41:11.336448   33104 start.go:297] selected driver: kvm2
	I0927 17:41:11.336470   33104 start.go:901] validating driver "kvm2" against <nil>
	I0927 17:41:11.336482   33104 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:41:11.337172   33104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:41:11.337254   33104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 17:41:11.353494   33104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 17:41:11.353573   33104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 17:41:11.353841   33104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:41:11.353874   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:11.353916   33104 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 17:41:11.353921   33104 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 17:41:11.353981   33104 start.go:340] cluster config:
	{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:41:11.354070   33104 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:41:11.356133   33104 out.go:177] * Starting "ha-748477" primary control-plane node in "ha-748477" cluster
	I0927 17:41:11.357496   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:41:11.357561   33104 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 17:41:11.357574   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:41:11.357669   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:41:11.357682   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:41:11.358001   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:41:11.358028   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json: {Name:mke89db25d5d216a50900f26b95b8fd2ee54cc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:11.358189   33104 start.go:360] acquireMachinesLock for ha-748477: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:41:11.358227   33104 start.go:364] duration metric: took 22.952µs to acquireMachinesLock for "ha-748477"
	I0927 17:41:11.358249   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:41:11.358314   33104 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 17:41:11.360140   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:41:11.360316   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:11.360378   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:11.375306   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0927 17:41:11.375759   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:11.376301   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:11.376329   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:11.376675   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:11.376850   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:11.377007   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:11.377148   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:41:11.377181   33104 client.go:168] LocalClient.Create starting
	I0927 17:41:11.377218   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:41:11.377295   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:41:11.377314   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:41:11.377384   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:41:11.377413   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:41:11.377441   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:41:11.377466   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:41:11.377486   33104 main.go:141] libmachine: (ha-748477) Calling .PreCreateCheck
	I0927 17:41:11.377873   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:11.378248   33104 main.go:141] libmachine: Creating machine...
	I0927 17:41:11.378289   33104 main.go:141] libmachine: (ha-748477) Calling .Create
	I0927 17:41:11.378436   33104 main.go:141] libmachine: (ha-748477) Creating KVM machine...
	I0927 17:41:11.379983   33104 main.go:141] libmachine: (ha-748477) DBG | found existing default KVM network
	I0927 17:41:11.380694   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.380548   33127 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b50}
	I0927 17:41:11.380717   33104 main.go:141] libmachine: (ha-748477) DBG | created network xml: 
	I0927 17:41:11.380729   33104 main.go:141] libmachine: (ha-748477) DBG | <network>
	I0927 17:41:11.380736   33104 main.go:141] libmachine: (ha-748477) DBG |   <name>mk-ha-748477</name>
	I0927 17:41:11.380744   33104 main.go:141] libmachine: (ha-748477) DBG |   <dns enable='no'/>
	I0927 17:41:11.380751   33104 main.go:141] libmachine: (ha-748477) DBG |   
	I0927 17:41:11.380761   33104 main.go:141] libmachine: (ha-748477) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 17:41:11.380765   33104 main.go:141] libmachine: (ha-748477) DBG |     <dhcp>
	I0927 17:41:11.380773   33104 main.go:141] libmachine: (ha-748477) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 17:41:11.380778   33104 main.go:141] libmachine: (ha-748477) DBG |     </dhcp>
	I0927 17:41:11.380786   33104 main.go:141] libmachine: (ha-748477) DBG |   </ip>
	I0927 17:41:11.380790   33104 main.go:141] libmachine: (ha-748477) DBG |   
	I0927 17:41:11.380886   33104 main.go:141] libmachine: (ha-748477) DBG | </network>
	I0927 17:41:11.380936   33104 main.go:141] libmachine: (ha-748477) DBG | 
	I0927 17:41:11.386015   33104 main.go:141] libmachine: (ha-748477) DBG | trying to create private KVM network mk-ha-748477 192.168.39.0/24...
	I0927 17:41:11.458118   33104 main.go:141] libmachine: (ha-748477) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 ...
	I0927 17:41:11.458145   33104 main.go:141] libmachine: (ha-748477) DBG | private KVM network mk-ha-748477 192.168.39.0/24 created
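For reference, the private network logged above can be reproduced by hand with virsh. The sketch below is illustrative only (the kvm2 driver talks to libvirt directly, not through virsh); it assumes virsh is on PATH and simply writes the same XML to a temporary file before defining and starting the network.

package main

import (
	"log"
	"os"
	"os/exec"
)

// networkXML mirrors the mk-ha-748477 definition printed in the log above.
const networkXML = `<network>
  <name>mk-ha-748477</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func virsh(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	f, err := os.CreateTemp("", "mk-ha-748477-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Register the persistent network definition, then bring it up.
	if err := virsh("net-define", f.Name()); err != nil {
		log.Fatal(err)
	}
	if err := virsh("net-start", "mk-ha-748477"); err != nil {
		log.Fatal(err)
	}
}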
	I0927 17:41:11.458158   33104 main.go:141] libmachine: (ha-748477) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:41:11.458170   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.458056   33127 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.458262   33104 main.go:141] libmachine: (ha-748477) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:41:11.695851   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.695688   33127 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa...
	I0927 17:41:11.894120   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.893958   33127 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/ha-748477.rawdisk...
	I0927 17:41:11.894152   33104 main.go:141] libmachine: (ha-748477) DBG | Writing magic tar header
	I0927 17:41:11.894162   33104 main.go:141] libmachine: (ha-748477) DBG | Writing SSH key tar header
	I0927 17:41:11.894171   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.894079   33127 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 ...
	I0927 17:41:11.894191   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477
	I0927 17:41:11.894234   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 (perms=drwx------)
	I0927 17:41:11.894262   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:41:11.894278   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:41:11.894286   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:41:11.894294   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.894300   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:41:11.894308   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:41:11.894314   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:41:11.894322   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home
	I0927 17:41:11.894332   33104 main.go:141] libmachine: (ha-748477) DBG | Skipping /home - not owner
	I0927 17:41:11.894350   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:41:11.894382   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:41:11.894396   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:41:11.894409   33104 main.go:141] libmachine: (ha-748477) Creating domain...
	I0927 17:41:11.895515   33104 main.go:141] libmachine: (ha-748477) define libvirt domain using xml: 
	I0927 17:41:11.895554   33104 main.go:141] libmachine: (ha-748477) <domain type='kvm'>
	I0927 17:41:11.895564   33104 main.go:141] libmachine: (ha-748477)   <name>ha-748477</name>
	I0927 17:41:11.895570   33104 main.go:141] libmachine: (ha-748477)   <memory unit='MiB'>2200</memory>
	I0927 17:41:11.895577   33104 main.go:141] libmachine: (ha-748477)   <vcpu>2</vcpu>
	I0927 17:41:11.895582   33104 main.go:141] libmachine: (ha-748477)   <features>
	I0927 17:41:11.895589   33104 main.go:141] libmachine: (ha-748477)     <acpi/>
	I0927 17:41:11.895594   33104 main.go:141] libmachine: (ha-748477)     <apic/>
	I0927 17:41:11.895600   33104 main.go:141] libmachine: (ha-748477)     <pae/>
	I0927 17:41:11.895611   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.895618   33104 main.go:141] libmachine: (ha-748477)   </features>
	I0927 17:41:11.895625   33104 main.go:141] libmachine: (ha-748477)   <cpu mode='host-passthrough'>
	I0927 17:41:11.895636   33104 main.go:141] libmachine: (ha-748477)   
	I0927 17:41:11.895642   33104 main.go:141] libmachine: (ha-748477)   </cpu>
	I0927 17:41:11.895652   33104 main.go:141] libmachine: (ha-748477)   <os>
	I0927 17:41:11.895658   33104 main.go:141] libmachine: (ha-748477)     <type>hvm</type>
	I0927 17:41:11.895667   33104 main.go:141] libmachine: (ha-748477)     <boot dev='cdrom'/>
	I0927 17:41:11.895677   33104 main.go:141] libmachine: (ha-748477)     <boot dev='hd'/>
	I0927 17:41:11.895684   33104 main.go:141] libmachine: (ha-748477)     <bootmenu enable='no'/>
	I0927 17:41:11.895695   33104 main.go:141] libmachine: (ha-748477)   </os>
	I0927 17:41:11.895726   33104 main.go:141] libmachine: (ha-748477)   <devices>
	I0927 17:41:11.895746   33104 main.go:141] libmachine: (ha-748477)     <disk type='file' device='cdrom'>
	I0927 17:41:11.895755   33104 main.go:141] libmachine: (ha-748477)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/boot2docker.iso'/>
	I0927 17:41:11.895767   33104 main.go:141] libmachine: (ha-748477)       <target dev='hdc' bus='scsi'/>
	I0927 17:41:11.895779   33104 main.go:141] libmachine: (ha-748477)       <readonly/>
	I0927 17:41:11.895787   33104 main.go:141] libmachine: (ha-748477)     </disk>
	I0927 17:41:11.895799   33104 main.go:141] libmachine: (ha-748477)     <disk type='file' device='disk'>
	I0927 17:41:11.895810   33104 main.go:141] libmachine: (ha-748477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:41:11.895825   33104 main.go:141] libmachine: (ha-748477)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/ha-748477.rawdisk'/>
	I0927 17:41:11.895835   33104 main.go:141] libmachine: (ha-748477)       <target dev='hda' bus='virtio'/>
	I0927 17:41:11.895843   33104 main.go:141] libmachine: (ha-748477)     </disk>
	I0927 17:41:11.895850   33104 main.go:141] libmachine: (ha-748477)     <interface type='network'>
	I0927 17:41:11.895865   33104 main.go:141] libmachine: (ha-748477)       <source network='mk-ha-748477'/>
	I0927 17:41:11.895880   33104 main.go:141] libmachine: (ha-748477)       <model type='virtio'/>
	I0927 17:41:11.895892   33104 main.go:141] libmachine: (ha-748477)     </interface>
	I0927 17:41:11.895902   33104 main.go:141] libmachine: (ha-748477)     <interface type='network'>
	I0927 17:41:11.895912   33104 main.go:141] libmachine: (ha-748477)       <source network='default'/>
	I0927 17:41:11.895923   33104 main.go:141] libmachine: (ha-748477)       <model type='virtio'/>
	I0927 17:41:11.895932   33104 main.go:141] libmachine: (ha-748477)     </interface>
	I0927 17:41:11.895944   33104 main.go:141] libmachine: (ha-748477)     <serial type='pty'>
	I0927 17:41:11.895957   33104 main.go:141] libmachine: (ha-748477)       <target port='0'/>
	I0927 17:41:11.895968   33104 main.go:141] libmachine: (ha-748477)     </serial>
	I0927 17:41:11.895990   33104 main.go:141] libmachine: (ha-748477)     <console type='pty'>
	I0927 17:41:11.896002   33104 main.go:141] libmachine: (ha-748477)       <target type='serial' port='0'/>
	I0927 17:41:11.896015   33104 main.go:141] libmachine: (ha-748477)     </console>
	I0927 17:41:11.896031   33104 main.go:141] libmachine: (ha-748477)     <rng model='virtio'>
	I0927 17:41:11.896046   33104 main.go:141] libmachine: (ha-748477)       <backend model='random'>/dev/random</backend>
	I0927 17:41:11.896060   33104 main.go:141] libmachine: (ha-748477)     </rng>
	I0927 17:41:11.896070   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.896076   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.896083   33104 main.go:141] libmachine: (ha-748477)   </devices>
	I0927 17:41:11.896087   33104 main.go:141] libmachine: (ha-748477) </domain>
	I0927 17:41:11.896095   33104 main.go:141] libmachine: (ha-748477) 
	I0927 17:41:11.900567   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:73:40:b9 in network default
	I0927 17:41:11.901061   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:11.901075   33104 main.go:141] libmachine: (ha-748477) Ensuring networks are active...
	I0927 17:41:11.901826   33104 main.go:141] libmachine: (ha-748477) Ensuring network default is active
	I0927 17:41:11.902116   33104 main.go:141] libmachine: (ha-748477) Ensuring network mk-ha-748477 is active
	I0927 17:41:11.902614   33104 main.go:141] libmachine: (ha-748477) Getting domain xml...
	I0927 17:41:11.903566   33104 main.go:141] libmachine: (ha-748477) Creating domain...
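Per the lines above, the generated domain XML is registered with libvirt, read back ("Getting domain xml..."), and then the machine is started. A minimal virsh-based sketch of those three steps, illustrative only; "domain.xml" is a placeholder for the XML printed above, and the real driver uses the libvirt API rather than shelling out.

package main

import (
	"log"
	"os"
	"os/exec"
)

func virsh(args ...string) ([]byte, error) {
	cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
	cmd.Stderr = os.Stderr
	return cmd.Output()
}

func main() {
	// Register the domain from the XML printed above (saved as domain.xml).
	if _, err := virsh("define", "domain.xml"); err != nil {
		log.Fatal(err)
	}
	// Read the definition back, as "Getting domain xml..." does.
	xml, err := virsh("dumpxml", "ha-748477")
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(xml)
	// Boot the VM ("Creating domain..." corresponds to starting it).
	if _, err := virsh("start", "ha-748477"); err != nil {
		log.Fatal(err)
	}
}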
	I0927 17:41:13.125948   33104 main.go:141] libmachine: (ha-748477) Waiting to get IP...
	I0927 17:41:13.126613   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.126980   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.127001   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.126925   33127 retry.go:31] will retry after 221.741675ms: waiting for machine to come up
	I0927 17:41:13.350389   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.350866   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.350891   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.350820   33127 retry.go:31] will retry after 384.917671ms: waiting for machine to come up
	I0927 17:41:13.737469   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.737940   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.737963   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.737901   33127 retry.go:31] will retry after 357.409754ms: waiting for machine to come up
	I0927 17:41:14.096593   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:14.097137   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:14.097157   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:14.097100   33127 retry.go:31] will retry after 455.369509ms: waiting for machine to come up
	I0927 17:41:14.553700   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:14.554092   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:14.554138   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:14.554063   33127 retry.go:31] will retry after 555.024151ms: waiting for machine to come up
	I0927 17:41:15.111039   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:15.111576   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:15.111596   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:15.111511   33127 retry.go:31] will retry after 767.019564ms: waiting for machine to come up
	I0927 17:41:15.880561   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:15.880971   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:15.881009   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:15.880933   33127 retry.go:31] will retry after 930.894786ms: waiting for machine to come up
	I0927 17:41:16.814028   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:16.814547   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:16.814568   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:16.814503   33127 retry.go:31] will retry after 1.391282407s: waiting for machine to come up
	I0927 17:41:18.208116   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:18.208453   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:18.208476   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:18.208423   33127 retry.go:31] will retry after 1.406630844s: waiting for machine to come up
	I0927 17:41:19.617054   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:19.617491   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:19.617513   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:19.617444   33127 retry.go:31] will retry after 1.955568674s: waiting for machine to come up
	I0927 17:41:21.574672   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:21.575031   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:21.575056   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:21.574984   33127 retry.go:31] will retry after 2.462121776s: waiting for machine to come up
	I0927 17:41:24.039742   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:24.040176   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:24.040197   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:24.040139   33127 retry.go:31] will retry after 3.071571928s: waiting for machine to come up
	I0927 17:41:27.113044   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:27.113494   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:27.113522   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:27.113444   33127 retry.go:31] will retry after 3.158643907s: waiting for machine to come up
	I0927 17:41:30.273431   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:30.273901   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:30.273928   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:30.273851   33127 retry.go:31] will retry after 4.144134204s: waiting for machine to come up
	I0927 17:41:34.421621   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.421958   33104 main.go:141] libmachine: (ha-748477) Found IP for machine: 192.168.39.217
	I0927 17:41:34.421985   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has current primary IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.421995   33104 main.go:141] libmachine: (ha-748477) Reserving static IP address...
	I0927 17:41:34.422371   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find host DHCP lease matching {name: "ha-748477", mac: "52:54:00:cf:7b:81", ip: "192.168.39.217"} in network mk-ha-748477
	I0927 17:41:34.496658   33104 main.go:141] libmachine: (ha-748477) Reserved static IP address: 192.168.39.217
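The retry.go lines above poll for the VM's DHCP lease with a growing delay until an address appears for MAC 52:54:00:cf:7b:81. A minimal sketch of the same polling pattern, assuming virsh is available and grepping its net-dhcp-leases output; the backoff values here are arbitrary, not the driver's.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForLease polls `virsh net-dhcp-leases` until a lease for the given MAC
// appears, sleeping a little longer after each failed attempt.
func waitForLease(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dhcp-leases", network).CombinedOutput()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if strings.Contains(line, mac) {
					return line, nil // lease row contains the assigned IP
				}
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2 // back off, roughly like the retry intervals above
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in %s within %s", mac, network, timeout)
}

func main() {
	lease, err := waitForLease("mk-ha-748477", "52:54:00:cf:7b:81", 3*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(lease)
}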
	I0927 17:41:34.496683   33104 main.go:141] libmachine: (ha-748477) Waiting for SSH to be available...
	I0927 17:41:34.496692   33104 main.go:141] libmachine: (ha-748477) DBG | Getting to WaitForSSH function...
	I0927 17:41:34.499481   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.499883   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477
	I0927 17:41:34.499908   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find defined IP address of network mk-ha-748477 interface with MAC address 52:54:00:cf:7b:81
	I0927 17:41:34.500086   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH client type: external
	I0927 17:41:34.500117   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa (-rw-------)
	I0927 17:41:34.500142   33104 main.go:141] libmachine: (ha-748477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:41:34.500152   33104 main.go:141] libmachine: (ha-748477) DBG | About to run SSH command:
	I0927 17:41:34.500164   33104 main.go:141] libmachine: (ha-748477) DBG | exit 0
	I0927 17:41:34.503851   33104 main.go:141] libmachine: (ha-748477) DBG | SSH cmd err, output: exit status 255: 
	I0927 17:41:34.503922   33104 main.go:141] libmachine: (ha-748477) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 17:41:34.503936   33104 main.go:141] libmachine: (ha-748477) DBG | command : exit 0
	I0927 17:41:34.503943   33104 main.go:141] libmachine: (ha-748477) DBG | err     : exit status 255
	I0927 17:41:34.503959   33104 main.go:141] libmachine: (ha-748477) DBG | output  : 
	I0927 17:41:37.504545   33104 main.go:141] libmachine: (ha-748477) DBG | Getting to WaitForSSH function...
	I0927 17:41:37.507144   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.507648   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.507672   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.507819   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH client type: external
	I0927 17:41:37.507868   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa (-rw-------)
	I0927 17:41:37.507900   33104 main.go:141] libmachine: (ha-748477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:41:37.507920   33104 main.go:141] libmachine: (ha-748477) DBG | About to run SSH command:
	I0927 17:41:37.507941   33104 main.go:141] libmachine: (ha-748477) DBG | exit 0
	I0927 17:41:37.630810   33104 main.go:141] libmachine: (ha-748477) DBG | SSH cmd err, output: <nil>: 
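WaitForSSH succeeds once `exit 0` runs cleanly over the external ssh client with the options logged above. A hedged sketch of the same probe, with the key path and address taken from this run; it is not the driver's actual implementation and retries forever rather than timing out.

package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	key := "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.217",
		"exit 0",
	}
	// Keep probing until the guest's sshd accepts the key-based login.
	for {
		cmd := exec.Command("ssh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err == nil {
			log.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log above retries on a similar cadence
	}
}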
	I0927 17:41:37.631066   33104 main.go:141] libmachine: (ha-748477) KVM machine creation complete!
	I0927 17:41:37.631372   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:37.631910   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:37.632095   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:37.632272   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:41:37.632285   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:37.633516   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:41:37.633528   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:41:37.633533   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:41:37.633550   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.635751   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.636081   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.636099   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.636220   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.636388   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.636532   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.636625   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.636778   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.636951   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.636961   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:41:37.734259   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:41:37.734293   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:41:37.734303   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.737128   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.737466   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.737495   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.737627   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.737846   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.737998   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.738153   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.738274   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.738468   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.738480   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:41:37.835159   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:41:37.835214   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:41:37.835220   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:41:37.835227   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:37.835463   33104 buildroot.go:166] provisioning hostname "ha-748477"
	I0927 17:41:37.835485   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:37.835646   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.838659   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.838974   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.838995   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.839272   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.839470   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.839648   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.839769   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.839931   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.840140   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.840159   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477 && echo "ha-748477" | sudo tee /etc/hostname
	I0927 17:41:37.952689   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:41:37.952711   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.955478   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.955872   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.955904   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.956089   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.956272   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.956442   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.956569   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.956706   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.956867   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.956881   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:41:38.063375   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:41:38.063408   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:41:38.063477   33104 buildroot.go:174] setting up certificates
	I0927 17:41:38.063491   33104 provision.go:84] configureAuth start
	I0927 17:41:38.063509   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:38.063799   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.066439   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.066780   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.066808   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.066982   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.069059   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.069387   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.069405   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.069581   33104 provision.go:143] copyHostCerts
	I0927 17:41:38.069625   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:41:38.069666   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:41:38.069678   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:41:38.069763   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:41:38.069850   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:41:38.069876   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:41:38.069882   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:41:38.069916   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:41:38.069980   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:41:38.070006   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:41:38.070015   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:41:38.070049   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:41:38.070101   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477 san=[127.0.0.1 192.168.39.217 ha-748477 localhost minikube]
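provision.go:117 above generates a server certificate signed by the minikube CA with the listed SANs. As a rough standard-library illustration (not minikube's code), a certificate with the same SAN set could be produced as below; it assumes ca.pem/ca-key.pem are the PEM files referenced in the log and writes only server.pem.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	certsDir := "/home/jenkins/minikube-integration/19712-11184/.minikube/certs"

	// Load the CA pair referenced earlier in the log.
	caPair, err := tls.LoadX509KeyPair(certsDir+"/ca.pem", certsDir+"/ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-748477"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision.go line above.
		DNSNames:    []string{"ha-748477", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
	}

	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
	if err != nil {
		log.Fatal(err)
	}

	out, err := os.Create("server.pem")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}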
	I0927 17:41:38.147021   33104 provision.go:177] copyRemoteCerts
	I0927 17:41:38.147089   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:41:38.147110   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.149977   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.150246   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.150274   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.150432   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.150602   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.150754   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.150921   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.228142   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:41:38.228227   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:41:38.251467   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:41:38.251538   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 17:41:38.274370   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:41:38.274489   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
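The three ssh_runner scp lines above push the CA and server certificates into /etc/docker on the guest. Because /etc/docker is root-owned, an equivalent manual copy can stream each file over SSH into sudo tee; a hedged sketch reusing the key and address from this run.

package main

import (
	"log"
	"os"
	"os/exec"
)

// pushFile streams a local file over ssh into a root-owned path on the guest.
func pushFile(local, remote string) error {
	f, err := os.Open(local)
	if err != nil {
		return err
	}
	defer f.Close()

	key := "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa"
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", key,
		"docker@192.168.39.217",
		"sudo tee "+remote+" >/dev/null")
	cmd.Stdin = f
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	base := "/home/jenkins/minikube-integration/19712-11184/.minikube"
	copies := map[string]string{
		base + "/certs/ca.pem":            "/etc/docker/ca.pem",
		base + "/machines/server.pem":     "/etc/docker/server.pem",
		base + "/machines/server-key.pem": "/etc/docker/server-key.pem",
	}
	for local, remote := range copies {
		if err := pushFile(local, remote); err != nil {
			log.Fatalf("copy %s: %v", local, err)
		}
	}
}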
	I0927 17:41:38.296698   33104 provision.go:87] duration metric: took 233.191722ms to configureAuth
	I0927 17:41:38.296732   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:41:38.296932   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:41:38.297016   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.299619   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.299927   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.299966   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.300128   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.300322   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.300479   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.300682   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.300851   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:38.301048   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:38.301067   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:41:38.523444   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:41:38.523472   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:41:38.523483   33104 main.go:141] libmachine: (ha-748477) Calling .GetURL
	I0927 17:41:38.524760   33104 main.go:141] libmachine: (ha-748477) DBG | Using libvirt version 6000000
	I0927 17:41:38.527048   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.527364   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.527391   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.527606   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:41:38.527637   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:41:38.527650   33104 client.go:171] duration metric: took 27.150459274s to LocalClient.Create
	I0927 17:41:38.527678   33104 start.go:167] duration metric: took 27.150528415s to libmachine.API.Create "ha-748477"
	I0927 17:41:38.527690   33104 start.go:293] postStartSetup for "ha-748477" (driver="kvm2")
	I0927 17:41:38.527705   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:41:38.527728   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.527972   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:41:38.528001   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.530216   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.530626   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.530665   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.530772   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.530924   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.531065   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.531219   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.609034   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:41:38.613222   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:41:38.613247   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:41:38.613317   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:41:38.613401   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:41:38.613411   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:41:38.613506   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:41:38.622717   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:41:38.645459   33104 start.go:296] duration metric: took 117.75234ms for postStartSetup
	I0927 17:41:38.645507   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:38.646122   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.648685   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.648941   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.648975   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.649188   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:41:38.649458   33104 start.go:128] duration metric: took 27.291131215s to createHost
	I0927 17:41:38.649491   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.651737   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.652093   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.652119   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.652302   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.652471   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.652616   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.652728   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.652843   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:38.653010   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:38.653020   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:41:38.751064   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458898.732716995
	
	I0927 17:41:38.751086   33104 fix.go:216] guest clock: 1727458898.732716995
	I0927 17:41:38.751094   33104 fix.go:229] Guest: 2024-09-27 17:41:38.732716995 +0000 UTC Remote: 2024-09-27 17:41:38.649473144 +0000 UTC m=+27.402870254 (delta=83.243851ms)
	I0927 17:41:38.751135   33104 fix.go:200] guest clock delta is within tolerance: 83.243851ms
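The reported delta is simply the guest clock sample minus the host-side reference taken when createHost finished; checking the arithmetic (bc assumed available on the host):
	echo "1727458898.732716995 - 1727458898.649473144" | bc   # .083243851 s, i.e. the logged 83.243851ms delta, which is within tolerance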
	I0927 17:41:38.751145   33104 start.go:83] releasing machines lock for "ha-748477", held for 27.392909773s
	I0927 17:41:38.751166   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.751423   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.754190   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.754506   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.754527   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.754757   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755262   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755415   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755525   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:41:38.755565   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.755625   33104 ssh_runner.go:195] Run: cat /version.json
	I0927 17:41:38.755649   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.758113   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758305   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758445   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.758479   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758603   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.758725   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.758751   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758761   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.758893   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.758901   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.759041   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.759038   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.759157   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.759261   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.831198   33104 ssh_runner.go:195] Run: systemctl --version
	I0927 17:41:38.870670   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:41:39.025889   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:41:39.031712   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:41:39.031797   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:41:39.047705   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:41:39.047735   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:41:39.047802   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:41:39.063366   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:41:39.077273   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:41:39.077334   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:41:39.090744   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:41:39.103931   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:41:39.214425   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:41:39.364442   33104 docker.go:233] disabling docker service ...
	I0927 17:41:39.364513   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:41:39.380260   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:41:39.394355   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:41:39.522355   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:41:39.649820   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:41:39.663016   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:41:39.680505   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:41:39.680564   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.690319   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:41:39.690383   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.699872   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.709466   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.719082   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:41:39.729267   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.739369   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.757384   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
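Those sed edits rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl). A quick hypothetical check that they landed, run against the node:
	minikube ssh -p ha-748477 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf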
	I0927 17:41:39.767495   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:41:39.776770   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:41:39.776822   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:41:39.789488   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
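The sysctl probe failed only because br_netfilter was not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. A hedged verification of both, run against the node:
	minikube ssh -p ha-748477 -- 'lsmod | grep br_netfilter'                                     # module should now be loaded
	minikube ssh -p ha-748477 -- sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should print 1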
	I0927 17:41:39.798777   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:41:39.926081   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:41:40.015516   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:41:40.015581   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:41:40.020128   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:41:40.020188   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:41:40.023698   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:41:40.059901   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:41:40.059966   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:41:40.086976   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:41:40.115858   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:41:40.117036   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:40.119598   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:40.119937   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:40.119968   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:40.120181   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:41:40.124032   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:41:40.135947   33104 kubeadm.go:883] updating cluster {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 17:41:40.136051   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:41:40.136092   33104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:41:40.165756   33104 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 17:41:40.165826   33104 ssh_runner.go:195] Run: which lz4
	I0927 17:41:40.169366   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0927 17:41:40.169454   33104 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 17:41:40.173416   33104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 17:41:40.173444   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 17:41:41.416629   33104 crio.go:462] duration metric: took 1.247195052s to copy over tarball
	I0927 17:41:41.416710   33104 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 17:41:43.420793   33104 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.004054416s)
	I0927 17:41:43.420819   33104 crio.go:469] duration metric: took 2.004155312s to extract the tarball
	I0927 17:41:43.420825   33104 ssh_runner.go:146] rm: /preloaded.tar.lz4
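For scale: the preload tarball is 388,599,353 bytes, copied in roughly 1.25s and extracted in roughly 2.0s, which works out to (bc for the division):
	echo "scale=1; 388599353 / 1.247195052 / 1000000" | bc   # ~311.5 MB/s effective transfer rate for the host-to-VM scp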
	I0927 17:41:43.457422   33104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:41:43.499761   33104 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:41:43.499782   33104 cache_images.go:84] Images are preloaded, skipping loading
	I0927 17:41:43.499792   33104 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.1 crio true true} ...
	I0927 17:41:43.499910   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
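The kubelet unit drop-in above is written a few lines later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; once it is in place, systemd can show the merged unit (a sketch, run against the node):
	minikube ssh -p ha-748477 -- systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in carrying the ExecStart above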
	I0927 17:41:43.499992   33104 ssh_runner.go:195] Run: crio config
	I0927 17:41:43.543198   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:43.543224   33104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 17:41:43.543236   33104 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 17:41:43.543262   33104 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-748477 NodeName:ha-748477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 17:41:43.543436   33104 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-748477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
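kubeadm warns during init (further down in this log) that the kubeadm.k8s.io/v1beta3 API used above is deprecated. Once the config has been copied to /var/tmp/minikube/kubeadm.yaml on the node, the staged kubeadm binary could preview the migrated spec, e.g. (a sketch; the output path is hypothetical):
	minikube ssh -p ha-748477 -- sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-v1beta4.yaml
	minikube ssh -p ha-748477 -- sudo cat /tmp/kubeadm-v1beta4.yaml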
	
	I0927 17:41:43.543460   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:41:43.543509   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:41:43.558812   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:41:43.558948   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
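This kube-vip static pod holds the HA virtual IP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane nodes. Two hedged checks once the cluster is up: the manifest written to the node a few steps later, and the VIP itself (the /healthz endpoint is anonymously readable by default, and the curl would need to run from the libvirt host):
	minikube ssh -p ha-748477 -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	curl -ks https://192.168.39.254:8443/healthz   # expect "ok" once the API server is reachable through the VIP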
	I0927 17:41:43.559015   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:41:43.568537   33104 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 17:41:43.568607   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 17:41:43.577953   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0927 17:41:43.593972   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:41:43.611240   33104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 17:41:43.627698   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0927 17:41:43.643839   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:41:43.647475   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
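Both hosts-file updates use the same idempotent pattern: strip any existing entry with grep -v, append the fresh mapping, and copy the temp file back over /etc/hosts. A quick check that both names are present inside the guest (a sketch):
	minikube ssh -p ha-748477 -- grep -E 'host.minikube.internal|control-plane.minikube.internal' /etc/hosts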
	I0927 17:41:43.658814   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:41:43.786484   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:41:43.804054   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.217
	I0927 17:41:43.804083   33104 certs.go:194] generating shared ca certs ...
	I0927 17:41:43.804104   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:43.804286   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:41:43.804341   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:41:43.804355   33104 certs.go:256] generating profile certs ...
	I0927 17:41:43.804425   33104 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:41:43.804453   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt with IP's: []
	I0927 17:41:44.048105   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt ...
	I0927 17:41:44.048135   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt: {Name:mkd7683af781c2e3035297a91fe64cae3ec441ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.048290   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key ...
	I0927 17:41:44.048301   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key: {Name:mk936ca4ca8308f6e8f8130ae52fa2d91744c76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.048375   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce
	I0927 17:41:44.048390   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.254]
	I0927 17:41:44.272337   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce ...
	I0927 17:41:44.272368   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce: {Name:mkf1d6d3812ecb98203f4090aef1221789d1a599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.272516   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce ...
	I0927 17:41:44.272528   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce: {Name:mkb32ad35d33db5f9c4a13f60989170569fbf531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.272591   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:41:44.272698   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
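The apiserver serving cert was just generated with the SAN IPs listed above (the service IP 10.96.0.1, localhost, 10.0.0.1, the node IP and the HA VIP). They can be inspected directly from the profile directory on the Jenkins host, for example:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'   # the five IPs above should appear among the SANs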
	I0927 17:41:44.272754   33104 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:41:44.272768   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt with IP's: []
	I0927 17:41:44.519852   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt ...
	I0927 17:41:44.519879   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt: {Name:mk1051474491995de79f8f5636180a2c0021f95c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.520021   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key ...
	I0927 17:41:44.520031   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key: {Name:mkad9e4d33b049f5b649702366bd9b4b30c4cec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.520090   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:41:44.520107   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:41:44.520117   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:41:44.520140   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:41:44.520152   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:41:44.520167   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:41:44.520179   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:41:44.520191   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:41:44.520236   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:41:44.520268   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:41:44.520279   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:41:44.520308   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:41:44.520329   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:41:44.520350   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:41:44.520386   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:41:44.520410   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.520426   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.520438   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.521064   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:41:44.546442   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:41:44.578778   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:41:44.609231   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:41:44.633930   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 17:41:44.658617   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 17:41:44.684890   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:41:44.709741   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:41:44.734927   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:41:44.758813   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:41:44.782007   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:41:44.806214   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 17:41:44.823670   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:41:44.829647   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:41:44.840856   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.846133   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.846189   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.852561   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:41:44.864442   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:41:44.875936   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.880730   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.880801   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.886623   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:41:44.897721   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:41:44.909287   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.914201   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.914262   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.920052   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
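The 51391683.0, 3ec20f2e.0 and b5213941.0 symlink names are the OpenSSL subject-hash values of the respective certificates, which is how the system trust store looks them up. The derivation, using the same command the log runs (inside the guest):
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for minikubeCA.pem
	echo "/etc/ssl/certs/${HASH}.0"                                                   # -> /etc/ssl/certs/b5213941.0, the symlink name created above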
	I0927 17:41:44.931726   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:41:44.936188   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:41:44.936247   33104 kubeadm.go:392] StartCluster: {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:41:44.936344   33104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 17:41:44.936410   33104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 17:41:44.979358   33104 cri.go:89] found id: ""
	I0927 17:41:44.979433   33104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 17:41:44.989817   33104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 17:41:45.002904   33104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 17:41:45.014738   33104 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 17:41:45.014760   33104 kubeadm.go:157] found existing configuration files:
	
	I0927 17:41:45.014817   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 17:41:45.024092   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 17:41:45.024152   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 17:41:45.033904   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 17:41:45.043382   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 17:41:45.043439   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 17:41:45.052729   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 17:41:45.062303   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 17:41:45.062382   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 17:41:45.073359   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 17:41:45.082763   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 17:41:45.082834   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 17:41:45.093349   33104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 17:41:45.190478   33104 kubeadm.go:310] W0927 17:41:45.177079     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:41:45.191151   33104 kubeadm.go:310] W0927 17:41:45.178026     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:41:45.332459   33104 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 17:41:56.118950   33104 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 17:41:56.119025   33104 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 17:41:56.119141   33104 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 17:41:56.119282   33104 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 17:41:56.119422   33104 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 17:41:56.119505   33104 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 17:41:56.121450   33104 out.go:235]   - Generating certificates and keys ...
	I0927 17:41:56.121521   33104 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 17:41:56.121578   33104 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 17:41:56.121641   33104 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 17:41:56.121689   33104 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 17:41:56.121748   33104 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 17:41:56.121792   33104 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 17:41:56.121837   33104 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 17:41:56.121974   33104 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-748477 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0927 17:41:56.122044   33104 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 17:41:56.122168   33104 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-748477 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0927 17:41:56.122242   33104 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 17:41:56.122342   33104 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 17:41:56.122390   33104 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 17:41:56.122467   33104 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 17:41:56.122542   33104 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 17:41:56.122616   33104 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 17:41:56.122697   33104 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 17:41:56.122753   33104 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 17:41:56.122800   33104 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 17:41:56.122872   33104 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 17:41:56.122939   33104 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 17:41:56.124312   33104 out.go:235]   - Booting up control plane ...
	I0927 17:41:56.124416   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 17:41:56.124486   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 17:41:56.124538   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 17:41:56.124665   33104 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 17:41:56.124745   33104 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 17:41:56.124780   33104 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 17:41:56.124883   33104 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 17:41:56.124963   33104 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 17:41:56.125009   33104 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.127696ms
	I0927 17:41:56.125069   33104 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 17:41:56.125115   33104 kubeadm.go:310] [api-check] The API server is healthy after 6.021061385s
	I0927 17:41:56.125196   33104 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 17:41:56.125298   33104 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 17:41:56.125379   33104 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 17:41:56.125578   33104 kubeadm.go:310] [mark-control-plane] Marking the node ha-748477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 17:41:56.125630   33104 kubeadm.go:310] [bootstrap-token] Using token: hgqoqf.s456496vm8m19s9c
	I0927 17:41:56.127181   33104 out.go:235]   - Configuring RBAC rules ...
	I0927 17:41:56.127280   33104 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 17:41:56.127363   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 17:41:56.127490   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 17:41:56.127609   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 17:41:56.127704   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 17:41:56.127779   33104 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 17:41:56.127880   33104 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 17:41:56.127917   33104 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 17:41:56.127954   33104 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 17:41:56.127960   33104 kubeadm.go:310] 
	I0927 17:41:56.128007   33104 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 17:41:56.128013   33104 kubeadm.go:310] 
	I0927 17:41:56.128079   33104 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 17:41:56.128085   33104 kubeadm.go:310] 
	I0927 17:41:56.128104   33104 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 17:41:56.128151   33104 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 17:41:56.128195   33104 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 17:41:56.128202   33104 kubeadm.go:310] 
	I0927 17:41:56.128243   33104 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 17:41:56.128249   33104 kubeadm.go:310] 
	I0927 17:41:56.128286   33104 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 17:41:56.128292   33104 kubeadm.go:310] 
	I0927 17:41:56.128338   33104 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 17:41:56.128406   33104 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 17:41:56.128466   33104 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 17:41:56.128474   33104 kubeadm.go:310] 
	I0927 17:41:56.128548   33104 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 17:41:56.128620   33104 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 17:41:56.128629   33104 kubeadm.go:310] 
	I0927 17:41:56.128700   33104 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hgqoqf.s456496vm8m19s9c \
	I0927 17:41:56.128804   33104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 \
	I0927 17:41:56.128840   33104 kubeadm.go:310] 	--control-plane 
	I0927 17:41:56.128853   33104 kubeadm.go:310] 
	I0927 17:41:56.128959   33104 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 17:41:56.128965   33104 kubeadm.go:310] 
	I0927 17:41:56.129032   33104 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hgqoqf.s456496vm8m19s9c \
	I0927 17:41:56.129135   33104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 
	I0927 17:41:56.129145   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:56.129152   33104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 17:41:56.130873   33104 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 17:41:56.132138   33104 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 17:41:56.137758   33104 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 17:41:56.137776   33104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 17:41:56.158395   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 17:41:56.545302   33104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 17:41:56.545392   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:56.545450   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477 minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=true
	I0927 17:41:56.591362   33104 ops.go:34] apiserver oom_adj: -16
	I0927 17:41:56.760276   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:57.260604   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:57.760791   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:58.261339   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:58.760457   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.260517   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.760470   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.868738   33104 kubeadm.go:1113] duration metric: took 3.32341585s to wait for elevateKubeSystemPrivileges
	I0927 17:41:59.868781   33104 kubeadm.go:394] duration metric: took 14.932536309s to StartCluster
	I0927 17:41:59.868801   33104 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:59.868885   33104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:59.869758   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:59.870009   33104 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:41:59.870033   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 17:41:59.870039   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:41:59.870060   33104 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 17:41:59.870153   33104 addons.go:69] Setting storage-provisioner=true in profile "ha-748477"
	I0927 17:41:59.870163   33104 addons.go:69] Setting default-storageclass=true in profile "ha-748477"
	I0927 17:41:59.870172   33104 addons.go:234] Setting addon storage-provisioner=true in "ha-748477"
	I0927 17:41:59.870182   33104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-748477"
	I0927 17:41:59.870204   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:41:59.870252   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:41:59.870584   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.870621   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.870672   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.870714   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.886004   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0927 17:41:59.886153   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0927 17:41:59.886564   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.886600   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.887110   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.887133   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.887228   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.887251   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.887515   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.887575   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.887749   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.888058   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.888106   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.889954   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:59.890260   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 17:41:59.890780   33104 cert_rotation.go:140] Starting client certificate rotation controller
	I0927 17:41:59.891045   33104 addons.go:234] Setting addon default-storageclass=true in "ha-748477"
	I0927 17:41:59.891088   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:41:59.891458   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.891503   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.903067   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0927 17:41:59.903643   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.904195   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.904216   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.904591   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.904788   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.906479   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:59.907260   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0927 17:41:59.907760   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.908176   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.908198   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.908493   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.908731   33104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 17:41:59.909071   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.909112   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.910017   33104 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:41:59.910034   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 17:41:59.910047   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:59.912776   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.913203   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:59.913230   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.913350   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:59.913531   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:59.913696   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:59.913877   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:59.924467   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44339
	I0927 17:41:59.924928   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.925397   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.925419   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.925727   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.925908   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.927570   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:59.927761   33104 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 17:41:59.927779   33104 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 17:41:59.927796   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:59.930818   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.931197   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:59.931223   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.931372   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:59.931551   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:59.931697   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:59.931825   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:59.972954   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 17:42:00.031245   33104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:42:00.108187   33104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 17:42:00.508824   33104 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0927 17:42:00.769682   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.769710   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.769738   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.769760   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770044   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770066   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770083   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770095   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770104   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.770114   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770154   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.770162   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770305   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770325   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770489   33104 main.go:141] libmachine: (ha-748477) DBG | Closing plugin on server side
	I0927 17:42:00.770511   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770537   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770589   33104 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 17:42:00.770615   33104 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 17:42:00.770734   33104 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0927 17:42:00.770749   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:00.770760   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:00.770772   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:00.784878   33104 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0927 17:42:00.785650   33104 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0927 17:42:00.785672   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:00.785684   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:00.785689   33104 round_trippers.go:473]     Content-Type: application/json
	I0927 17:42:00.785695   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:00.797693   33104 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0927 17:42:00.797883   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.797901   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.798229   33104 main.go:141] libmachine: (ha-748477) DBG | Closing plugin on server side
	I0927 17:42:00.798283   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.798298   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.800228   33104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 17:42:00.801634   33104 addons.go:510] duration metric: took 931.586908ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 17:42:00.801675   33104 start.go:246] waiting for cluster config update ...
	I0927 17:42:00.801692   33104 start.go:255] writing updated cluster config ...
	I0927 17:42:00.803627   33104 out.go:201] 
	I0927 17:42:00.805265   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:00.805361   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:00.807406   33104 out.go:177] * Starting "ha-748477-m02" control-plane node in "ha-748477" cluster
	I0927 17:42:00.809474   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:42:00.809516   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:42:00.809668   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:42:00.809688   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:42:00.809795   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:00.810056   33104 start.go:360] acquireMachinesLock for ha-748477-m02: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:42:00.810115   33104 start.go:364] duration metric: took 34.075µs to acquireMachinesLock for "ha-748477-m02"
	I0927 17:42:00.810139   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:00.810241   33104 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0927 17:42:00.812114   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:42:00.812247   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:00.812304   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:00.827300   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0927 17:42:00.827815   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:00.828325   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:00.828351   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:00.828634   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:00.828813   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:00.828931   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:00.829052   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:42:00.829102   33104 client.go:168] LocalClient.Create starting
	I0927 17:42:00.829156   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:42:00.829194   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:42:00.829211   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:42:00.829254   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:42:00.829271   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:42:00.829282   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:42:00.829297   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:42:00.829305   33104 main.go:141] libmachine: (ha-748477-m02) Calling .PreCreateCheck
	I0927 17:42:00.829460   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:00.829822   33104 main.go:141] libmachine: Creating machine...
	I0927 17:42:00.829839   33104 main.go:141] libmachine: (ha-748477-m02) Calling .Create
	I0927 17:42:00.829995   33104 main.go:141] libmachine: (ha-748477-m02) Creating KVM machine...
	I0927 17:42:00.831397   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found existing default KVM network
	I0927 17:42:00.831514   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found existing private KVM network mk-ha-748477
	I0927 17:42:00.831650   33104 main.go:141] libmachine: (ha-748477-m02) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 ...
	I0927 17:42:00.831667   33104 main.go:141] libmachine: (ha-748477-m02) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:42:00.831765   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:00.831653   33474 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:42:00.831855   33104 main.go:141] libmachine: (ha-748477-m02) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:42:01.074875   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.074746   33474 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa...
	I0927 17:42:01.284394   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.284285   33474 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk...
	I0927 17:42:01.284431   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Writing magic tar header
	I0927 17:42:01.284445   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Writing SSH key tar header
	I0927 17:42:01.285094   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.284993   33474 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 ...
	I0927 17:42:01.285131   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02
	I0927 17:42:01.285145   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 (perms=drwx------)
	I0927 17:42:01.285162   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:42:01.285184   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:42:01.285194   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:42:01.285208   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:42:01.285223   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:42:01.285233   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:42:01.285245   33104 main.go:141] libmachine: (ha-748477-m02) Creating domain...
	I0927 17:42:01.285258   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:42:01.285272   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:42:01.285288   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:42:01.285298   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:42:01.285311   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home
	I0927 17:42:01.285320   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Skipping /home - not owner
	I0927 17:42:01.286214   33104 main.go:141] libmachine: (ha-748477-m02) define libvirt domain using xml: 
	I0927 17:42:01.286236   33104 main.go:141] libmachine: (ha-748477-m02) <domain type='kvm'>
	I0927 17:42:01.286246   33104 main.go:141] libmachine: (ha-748477-m02)   <name>ha-748477-m02</name>
	I0927 17:42:01.286259   33104 main.go:141] libmachine: (ha-748477-m02)   <memory unit='MiB'>2200</memory>
	I0927 17:42:01.286286   33104 main.go:141] libmachine: (ha-748477-m02)   <vcpu>2</vcpu>
	I0927 17:42:01.286306   33104 main.go:141] libmachine: (ha-748477-m02)   <features>
	I0927 17:42:01.286319   33104 main.go:141] libmachine: (ha-748477-m02)     <acpi/>
	I0927 17:42:01.286326   33104 main.go:141] libmachine: (ha-748477-m02)     <apic/>
	I0927 17:42:01.286334   33104 main.go:141] libmachine: (ha-748477-m02)     <pae/>
	I0927 17:42:01.286340   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286348   33104 main.go:141] libmachine: (ha-748477-m02)   </features>
	I0927 17:42:01.286353   33104 main.go:141] libmachine: (ha-748477-m02)   <cpu mode='host-passthrough'>
	I0927 17:42:01.286361   33104 main.go:141] libmachine: (ha-748477-m02)   
	I0927 17:42:01.286365   33104 main.go:141] libmachine: (ha-748477-m02)   </cpu>
	I0927 17:42:01.286372   33104 main.go:141] libmachine: (ha-748477-m02)   <os>
	I0927 17:42:01.286377   33104 main.go:141] libmachine: (ha-748477-m02)     <type>hvm</type>
	I0927 17:42:01.286386   33104 main.go:141] libmachine: (ha-748477-m02)     <boot dev='cdrom'/>
	I0927 17:42:01.286396   33104 main.go:141] libmachine: (ha-748477-m02)     <boot dev='hd'/>
	I0927 17:42:01.286408   33104 main.go:141] libmachine: (ha-748477-m02)     <bootmenu enable='no'/>
	I0927 17:42:01.286417   33104 main.go:141] libmachine: (ha-748477-m02)   </os>
	I0927 17:42:01.286442   33104 main.go:141] libmachine: (ha-748477-m02)   <devices>
	I0927 17:42:01.286465   33104 main.go:141] libmachine: (ha-748477-m02)     <disk type='file' device='cdrom'>
	I0927 17:42:01.286483   33104 main.go:141] libmachine: (ha-748477-m02)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/boot2docker.iso'/>
	I0927 17:42:01.286494   33104 main.go:141] libmachine: (ha-748477-m02)       <target dev='hdc' bus='scsi'/>
	I0927 17:42:01.286503   33104 main.go:141] libmachine: (ha-748477-m02)       <readonly/>
	I0927 17:42:01.286512   33104 main.go:141] libmachine: (ha-748477-m02)     </disk>
	I0927 17:42:01.286521   33104 main.go:141] libmachine: (ha-748477-m02)     <disk type='file' device='disk'>
	I0927 17:42:01.286532   33104 main.go:141] libmachine: (ha-748477-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:42:01.286553   33104 main.go:141] libmachine: (ha-748477-m02)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk'/>
	I0927 17:42:01.286577   33104 main.go:141] libmachine: (ha-748477-m02)       <target dev='hda' bus='virtio'/>
	I0927 17:42:01.286589   33104 main.go:141] libmachine: (ha-748477-m02)     </disk>
	I0927 17:42:01.286596   33104 main.go:141] libmachine: (ha-748477-m02)     <interface type='network'>
	I0927 17:42:01.286606   33104 main.go:141] libmachine: (ha-748477-m02)       <source network='mk-ha-748477'/>
	I0927 17:42:01.286615   33104 main.go:141] libmachine: (ha-748477-m02)       <model type='virtio'/>
	I0927 17:42:01.286623   33104 main.go:141] libmachine: (ha-748477-m02)     </interface>
	I0927 17:42:01.286631   33104 main.go:141] libmachine: (ha-748477-m02)     <interface type='network'>
	I0927 17:42:01.286637   33104 main.go:141] libmachine: (ha-748477-m02)       <source network='default'/>
	I0927 17:42:01.286669   33104 main.go:141] libmachine: (ha-748477-m02)       <model type='virtio'/>
	I0927 17:42:01.286682   33104 main.go:141] libmachine: (ha-748477-m02)     </interface>
	I0927 17:42:01.286689   33104 main.go:141] libmachine: (ha-748477-m02)     <serial type='pty'>
	I0927 17:42:01.286700   33104 main.go:141] libmachine: (ha-748477-m02)       <target port='0'/>
	I0927 17:42:01.286710   33104 main.go:141] libmachine: (ha-748477-m02)     </serial>
	I0927 17:42:01.286718   33104 main.go:141] libmachine: (ha-748477-m02)     <console type='pty'>
	I0927 17:42:01.286745   33104 main.go:141] libmachine: (ha-748477-m02)       <target type='serial' port='0'/>
	I0927 17:42:01.286757   33104 main.go:141] libmachine: (ha-748477-m02)     </console>
	I0927 17:42:01.286769   33104 main.go:141] libmachine: (ha-748477-m02)     <rng model='virtio'>
	I0927 17:42:01.286780   33104 main.go:141] libmachine: (ha-748477-m02)       <backend model='random'>/dev/random</backend>
	I0927 17:42:01.286789   33104 main.go:141] libmachine: (ha-748477-m02)     </rng>
	I0927 17:42:01.286798   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286805   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286814   33104 main.go:141] libmachine: (ha-748477-m02)   </devices>
	I0927 17:42:01.286821   33104 main.go:141] libmachine: (ha-748477-m02) </domain>
	I0927 17:42:01.286829   33104 main.go:141] libmachine: (ha-748477-m02) 
	I0927 17:42:01.295323   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:dc:55:b0 in network default
	I0927 17:42:01.296033   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring networks are active...
	I0927 17:42:01.296060   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:01.297259   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring network default is active
	I0927 17:42:01.297652   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring network mk-ha-748477 is active
	I0927 17:42:01.298102   33104 main.go:141] libmachine: (ha-748477-m02) Getting domain xml...
	I0927 17:42:01.298966   33104 main.go:141] libmachine: (ha-748477-m02) Creating domain...
	I0927 17:42:02.564561   33104 main.go:141] libmachine: (ha-748477-m02) Waiting to get IP...
	I0927 17:42:02.565309   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:02.565769   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:02.565802   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:02.565771   33474 retry.go:31] will retry after 303.737915ms: waiting for machine to come up
	I0927 17:42:02.871429   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:02.871830   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:02.871854   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:02.871802   33474 retry.go:31] will retry after 330.658569ms: waiting for machine to come up
	I0927 17:42:03.204264   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:03.204715   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:03.204739   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:03.204669   33474 retry.go:31] will retry after 480.920904ms: waiting for machine to come up
	I0927 17:42:03.687319   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:03.687901   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:03.687922   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:03.687827   33474 retry.go:31] will retry after 531.287792ms: waiting for machine to come up
	I0927 17:42:04.220560   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:04.221117   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:04.221147   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:04.221064   33474 retry.go:31] will retry after 645.559246ms: waiting for machine to come up
	I0927 17:42:04.867651   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:04.868069   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:04.868092   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:04.868034   33474 retry.go:31] will retry after 621.251066ms: waiting for machine to come up
	I0927 17:42:05.491583   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:05.492060   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:05.492081   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:05.492018   33474 retry.go:31] will retry after 1.144789742s: waiting for machine to come up
	I0927 17:42:06.638697   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:06.639055   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:06.639079   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:06.639012   33474 retry.go:31] will retry after 1.297542087s: waiting for machine to come up
	I0927 17:42:07.937857   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:07.938263   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:07.938304   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:07.938221   33474 retry.go:31] will retry after 1.728772395s: waiting for machine to come up
	I0927 17:42:09.668990   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:09.669424   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:09.669449   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:09.669386   33474 retry.go:31] will retry after 1.816616404s: waiting for machine to come up
	I0927 17:42:11.487206   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:11.487803   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:11.487830   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:11.487752   33474 retry.go:31] will retry after 2.262897527s: waiting for machine to come up
	I0927 17:42:13.751754   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:13.752138   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:13.752156   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:13.752109   33474 retry.go:31] will retry after 2.651419719s: waiting for machine to come up
	I0927 17:42:16.404625   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:16.405063   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:16.405087   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:16.405019   33474 retry.go:31] will retry after 2.90839218s: waiting for machine to come up
	I0927 17:42:19.317108   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:19.317506   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:19.317528   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:19.317483   33474 retry.go:31] will retry after 5.075657253s: waiting for machine to come up
	I0927 17:42:24.396494   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.396873   33104 main.go:141] libmachine: (ha-748477-m02) Found IP for machine: 192.168.39.58
	I0927 17:42:24.396891   33104 main.go:141] libmachine: (ha-748477-m02) Reserving static IP address...
	I0927 17:42:24.396899   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has current primary IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.397346   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find host DHCP lease matching {name: "ha-748477-m02", mac: "52:54:00:70:40:9e", ip: "192.168.39.58"} in network mk-ha-748477
	I0927 17:42:24.472936   33104 main.go:141] libmachine: (ha-748477-m02) Reserved static IP address: 192.168.39.58
	I0927 17:42:24.472971   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Getting to WaitForSSH function...
	I0927 17:42:24.472980   33104 main.go:141] libmachine: (ha-748477-m02) Waiting for SSH to be available...
	I0927 17:42:24.475305   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.475680   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.475707   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.475845   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using SSH client type: external
	I0927 17:42:24.475874   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa (-rw-------)
	I0927 17:42:24.475906   33104 main.go:141] libmachine: (ha-748477-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:42:24.475929   33104 main.go:141] libmachine: (ha-748477-m02) DBG | About to run SSH command:
	I0927 17:42:24.475966   33104 main.go:141] libmachine: (ha-748477-m02) DBG | exit 0
	I0927 17:42:24.606575   33104 main.go:141] libmachine: (ha-748477-m02) DBG | SSH cmd err, output: <nil>: 
	I0927 17:42:24.606899   33104 main.go:141] libmachine: (ha-748477-m02) KVM machine creation complete!
	I0927 17:42:24.607222   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:24.607761   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:24.607936   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:24.608087   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:42:24.608100   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetState
	I0927 17:42:24.609395   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:42:24.609407   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:42:24.609412   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:42:24.609417   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.611533   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.611868   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.611888   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.612022   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.612209   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.612399   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.612547   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.612697   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.612879   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.612890   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:42:24.725891   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:42:24.725919   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:42:24.725930   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.728630   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.728976   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.729006   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.729191   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.729340   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.729487   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.729609   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.729734   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.730028   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.730047   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:42:24.843111   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:42:24.843154   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:42:24.843160   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:42:24.843168   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:24.843396   33104 buildroot.go:166] provisioning hostname "ha-748477-m02"
	I0927 17:42:24.843419   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:24.843631   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.846504   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.847013   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.847039   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.847168   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.847341   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.847483   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.847608   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.847738   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.847896   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.847908   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477-m02 && echo "ha-748477-m02" | sudo tee /etc/hostname
	I0927 17:42:24.977249   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477-m02
	
	I0927 17:42:24.977281   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.980072   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.980385   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.980420   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.980605   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.980758   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.980898   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.980996   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.981123   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.981324   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.981348   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:42:25.103047   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:42:25.103077   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:42:25.103095   33104 buildroot.go:174] setting up certificates
	I0927 17:42:25.103105   33104 provision.go:84] configureAuth start
	I0927 17:42:25.103113   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:25.103329   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.105948   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.106264   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.106287   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.106466   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.109004   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.109390   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.109418   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.109562   33104 provision.go:143] copyHostCerts
	I0927 17:42:25.109608   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:42:25.109641   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:42:25.109649   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:42:25.109714   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:42:25.109782   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:42:25.109802   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:42:25.109808   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:42:25.109832   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:42:25.109873   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:42:25.109891   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:42:25.109897   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:42:25.109916   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:42:25.109964   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477-m02 san=[127.0.0.1 192.168.39.58 ha-748477-m02 localhost minikube]
	I0927 17:42:25.258618   33104 provision.go:177] copyRemoteCerts
	I0927 17:42:25.258690   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:42:25.258710   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.261212   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.261548   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.261586   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.261707   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.261895   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.262022   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.262183   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.348808   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:42:25.348876   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:42:25.372365   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:42:25.372460   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:42:25.397105   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:42:25.397179   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:42:25.422506   33104 provision.go:87] duration metric: took 319.390123ms to configureAuth
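	The configureAuth phase logged above generates a server certificate whose SANs cover the node's hostnames and IP addresses (san=[127.0.0.1 192.168.39.58 ha-748477-m02 localhost minikube]) and then copies it to the guest. The following is a minimal, hypothetical Go sketch of issuing such a SAN-bearing server certificate from an existing CA; it is not minikube's provision code, and the file names and validity period are placeholder assumptions.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (placeholder paths; assumes a PEM-encoded PKCS#1 RSA CA key).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Certificate template carrying the SANs seen in the log: hostnames plus IPs.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-748477-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-748477-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
	}

	// Sign the server certificate with the CA and print it in PEM form.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```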
	I0927 17:42:25.422532   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:42:25.422731   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:25.422799   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.425981   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.426408   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.426451   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.426606   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.426811   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.426969   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.427088   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.427226   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:25.427394   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:25.427408   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:42:25.661521   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:42:25.661549   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:42:25.661558   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetURL
	I0927 17:42:25.662897   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using libvirt version 6000000
	I0927 17:42:25.665077   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.665379   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.665406   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.665564   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:42:25.665578   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:42:25.665585   33104 client.go:171] duration metric: took 24.836463256s to LocalClient.Create
	I0927 17:42:25.665605   33104 start.go:167] duration metric: took 24.836555157s to libmachine.API.Create "ha-748477"
	I0927 17:42:25.665614   33104 start.go:293] postStartSetup for "ha-748477-m02" (driver="kvm2")
	I0927 17:42:25.665623   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:42:25.665638   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.665877   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:42:25.665912   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.668048   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.668346   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.668368   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.668516   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.668698   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.668825   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.668921   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.756903   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:42:25.761205   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:42:25.761239   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:42:25.761301   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:42:25.761393   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:42:25.761406   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:42:25.761506   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:42:25.771507   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:42:25.794679   33104 start.go:296] duration metric: took 129.051968ms for postStartSetup
	I0927 17:42:25.794731   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:25.795430   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.797924   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.798413   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.798536   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.798704   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:25.798927   33104 start.go:128] duration metric: took 24.988675406s to createHost
	I0927 17:42:25.798952   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.801621   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.801988   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.802014   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.802223   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.802493   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.802671   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.802846   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.803001   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:25.803176   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:25.803187   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:42:25.919256   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458945.878335898
	
	I0927 17:42:25.919284   33104 fix.go:216] guest clock: 1727458945.878335898
	I0927 17:42:25.919291   33104 fix.go:229] Guest: 2024-09-27 17:42:25.878335898 +0000 UTC Remote: 2024-09-27 17:42:25.79893912 +0000 UTC m=+74.552336236 (delta=79.396778ms)
	I0927 17:42:25.919305   33104 fix.go:200] guest clock delta is within tolerance: 79.396778ms
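	The fix.go lines above read the guest's `date +%s.%N` output over SSH and accept the clock skew because the delta (79.396778ms) is within tolerance. Below is a small, hypothetical Go sketch of that comparison only, not the code minikube actually runs; the 2s tolerance used here is an assumed example value.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// difference between the local clock and the guest clock.
func clockDelta(guestOutput string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	// Timestamp taken from the log line above; the tolerance is an assumption.
	delta, err := clockDelta("1727458945.878335898")
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta <= 2*time.Second)
}
```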
	I0927 17:42:25.919309   33104 start.go:83] releasing machines lock for "ha-748477-m02", held for 25.109183327s
	I0927 17:42:25.919328   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.919584   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.923127   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.923545   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.923567   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.925887   33104 out.go:177] * Found network options:
	I0927 17:42:25.927311   33104 out.go:177]   - NO_PROXY=192.168.39.217
	W0927 17:42:25.928478   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:42:25.928534   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929113   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929289   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929384   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:42:25.929413   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	W0927 17:42:25.929520   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:42:25.929601   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:42:25.929627   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.932151   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932175   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932560   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.932590   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932615   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.932630   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932752   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.932954   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.932961   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.933111   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.933120   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.933235   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.933296   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.933372   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:26.183554   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:42:26.189225   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:42:26.189283   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:42:26.205357   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:42:26.205380   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:42:26.205446   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:42:26.220556   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:42:26.233593   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:42:26.233652   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:42:26.247225   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:42:26.260534   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:42:26.378535   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:42:26.534217   33104 docker.go:233] disabling docker service ...
	I0927 17:42:26.534299   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:42:26.549457   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:42:26.564190   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:42:26.685257   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:42:26.798705   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:42:26.812177   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:42:26.830049   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:42:26.830103   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.840055   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:42:26.840116   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.850116   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.860785   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.870699   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:42:26.880704   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.890585   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.908416   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.918721   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:42:26.928323   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:42:26.928384   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:42:26.941204   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:42:26.951302   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:27.079256   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:42:27.173071   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:42:27.173154   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:42:27.178109   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:42:27.178161   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:42:27.181733   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:42:27.220015   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:42:27.220101   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:42:27.248905   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:42:27.278391   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:42:27.279800   33104 out.go:177]   - env NO_PROXY=192.168.39.217
	I0927 17:42:27.281146   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:27.283736   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:27.284063   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:27.284089   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:27.284314   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:42:27.288290   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:42:27.300052   33104 mustload.go:65] Loading cluster: ha-748477
	I0927 17:42:27.300240   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:27.300504   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:27.300539   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:27.315110   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I0927 17:42:27.315566   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:27.316043   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:27.316066   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:27.316373   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:27.316560   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:42:27.317977   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:42:27.318257   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:27.318292   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:27.332715   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41483
	I0927 17:42:27.333159   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:27.333632   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:27.333651   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:27.333971   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:27.334145   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:27.334286   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.58
	I0927 17:42:27.334297   33104 certs.go:194] generating shared ca certs ...
	I0927 17:42:27.334310   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.334448   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:42:27.334484   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:42:27.334493   33104 certs.go:256] generating profile certs ...
	I0927 17:42:27.334557   33104 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:42:27.334581   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3
	I0927 17:42:27.334596   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.254]
	I0927 17:42:27.465658   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 ...
	I0927 17:42:27.465688   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3: {Name:mkaab33c389419b06a9d77e9186d99602df50635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.465878   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3 ...
	I0927 17:42:27.465895   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3: {Name:mkd8c2f05dd9abfddfcaec4316f440a902331ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.465985   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:42:27.466113   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:42:27.466230   33104 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:42:27.466244   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:42:27.466256   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:42:27.466270   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:42:27.466282   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:42:27.466294   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:42:27.466308   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:42:27.466321   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:42:27.466333   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:42:27.466389   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:42:27.466416   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:42:27.466425   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:42:27.466444   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:42:27.466466   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:42:27.466487   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:42:27.466523   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:42:27.466547   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:42:27.466560   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:42:27.466572   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:27.466601   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:42:27.469497   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:27.469863   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:42:27.469893   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:27.470027   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:42:27.470244   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:42:27.470394   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:42:27.470523   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:42:27.543106   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 17:42:27.548154   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 17:42:27.558735   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 17:42:27.563158   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 17:42:27.573602   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 17:42:27.578182   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 17:42:27.588485   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 17:42:27.592478   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0927 17:42:27.603608   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 17:42:27.607668   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 17:42:27.620252   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 17:42:27.624885   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 17:42:27.644493   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:42:27.668339   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:42:27.691150   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:42:27.715241   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:42:27.738617   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 17:42:27.761798   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 17:42:27.784499   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:42:27.807853   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:42:27.830972   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:42:27.853871   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:42:27.876810   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:42:27.900824   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 17:42:27.917097   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 17:42:27.933218   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 17:42:27.951040   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0927 17:42:27.967600   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 17:42:27.984161   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 17:42:28.000351   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 17:42:28.016844   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:42:28.022390   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:42:28.032675   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.037756   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.037825   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.043874   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:42:28.054764   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:42:28.065690   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.070320   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.070397   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.075845   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:42:28.086186   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:42:28.096788   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.101134   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.101189   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.106935   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:42:28.117866   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:42:28.122166   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:42:28.122230   33104 kubeadm.go:934] updating node {m02 192.168.39.58 8443 v1.31.1 crio true true} ...
	I0927 17:42:28.122310   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:42:28.122340   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:42:28.122374   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:42:28.138780   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:42:28.138839   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0927 17:42:28.138889   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:42:28.148160   33104 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 17:42:28.148222   33104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 17:42:28.157728   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 17:42:28.157755   33104 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0927 17:42:28.157763   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:42:28.157776   33104 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0927 17:42:28.157830   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:42:28.161980   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 17:42:28.162007   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 17:42:29.300439   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:42:29.320131   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:42:29.320267   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:42:29.326589   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 17:42:29.326624   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 17:42:29.546925   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:42:29.547011   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:42:29.561849   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 17:42:29.561885   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 17:42:29.913564   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 17:42:29.925322   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 17:42:29.944272   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:42:29.964365   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:42:29.984051   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:42:29.988161   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:42:30.002830   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:30.137318   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:42:30.153192   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:42:30.153643   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:30.153695   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:30.169225   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0927 17:42:30.169762   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:30.170299   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:30.170317   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:30.170628   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:30.170823   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:30.170945   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:30.170945   33104 start.go:317] joinCluster: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:42:30.171062   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 17:42:30.171085   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:42:30.174028   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:30.174526   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:42:30.174587   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:30.174767   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:42:30.174933   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:42:30.175042   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:42:30.175135   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:42:30.312283   33104 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:30.312328   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 65pjfr.i6bbe1dq2ien9ht7 --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m02 --control-plane --apiserver-advertise-address=192.168.39.58 --apiserver-bind-port=8443"
	I0927 17:42:51.845707   33104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 65pjfr.i6bbe1dq2ien9ht7 --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m02 --control-plane --apiserver-advertise-address=192.168.39.58 --apiserver-bind-port=8443": (21.533351476s)
	I0927 17:42:51.845746   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 17:42:52.382325   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477-m02 minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=false
	I0927 17:42:52.503362   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-748477-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 17:42:52.636002   33104 start.go:319] duration metric: took 22.465049006s to joinCluster
	I0927 17:42:52.636077   33104 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:52.636363   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:52.637939   33104 out.go:177] * Verifying Kubernetes components...
	I0927 17:42:52.639336   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:52.942345   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:42:52.995016   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:42:52.995348   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 17:42:52.995436   33104 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0927 17:42:52.995698   33104 node_ready.go:35] waiting up to 6m0s for node "ha-748477-m02" to be "Ready" ...
	I0927 17:42:52.995829   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:52.995840   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:52.995852   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:52.995860   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:53.010565   33104 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0927 17:42:53.496570   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:53.496600   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:53.496611   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:53.496618   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:53.501635   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:53.996537   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:53.996562   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:53.996573   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:53.996580   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.000293   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:54.496339   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:54.496367   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:54.496379   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:54.496386   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.500335   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:54.996231   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:54.996259   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:54.996267   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:54.996270   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.999765   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:55.000291   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:55.496156   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:55.496179   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:55.496190   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:55.496194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:55.499869   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:55.995928   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:55.995956   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:55.995967   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:55.995976   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:56.000264   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:56.496233   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:56.496262   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:56.496274   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:56.496280   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:56.508959   33104 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0927 17:42:56.996002   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:56.996027   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:56.996035   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:56.996039   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:57.000487   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:57.001143   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:57.496517   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:57.496539   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:57.496547   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:57.496551   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:57.500687   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:57.996942   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:57.996968   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:57.996980   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:57.996985   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:58.007878   33104 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0927 17:42:58.495950   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:58.495978   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:58.495986   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:58.495992   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:58.502154   33104 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0927 17:42:58.995965   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:58.995987   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:58.995994   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:58.995999   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:59.120906   33104 round_trippers.go:574] Response Status: 200 OK in 124 milliseconds
	I0927 17:42:59.121564   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:59.496878   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:59.496899   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:59.496907   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:59.496913   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:59.500334   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:59.996861   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:59.996891   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:59.996904   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:59.996909   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:00.000651   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:00.496984   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:00.497010   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:00.497020   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:00.497025   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:00.501929   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:00.996193   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:00.996216   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:00.996224   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:00.996228   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:01.000081   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:01.496245   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:01.496271   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:01.496280   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:01.496289   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:01.500327   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:01.500876   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:01.996256   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:01.996293   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:01.996319   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:01.996323   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:02.000731   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:02.496770   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:02.496794   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:02.496807   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:02.496811   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:02.499906   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:02.996753   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:02.996778   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:02.996788   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:02.996794   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.000162   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:03.496074   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:03.496103   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:03.496115   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:03.496122   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.500371   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:03.500905   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:03.996146   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:03.996168   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:03.996176   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:03.996180   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.999817   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:04.496897   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:04.496927   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:04.496938   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:04.496946   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:04.501634   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:04.996866   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:04.996886   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:04.996894   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:04.996899   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.000028   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:05.496388   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:05.496410   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:05.496417   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.496421   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:05.501021   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:05.501573   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:05.996337   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:05.996362   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:05.996371   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.996376   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:05.999502   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:06.496159   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:06.496185   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:06.496196   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:06.496201   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:06.499954   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:06.996765   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:06.996784   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:06.996792   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:06.996796   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:07.000129   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:07.496829   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:07.496853   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:07.496864   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:07.496868   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:07.499884   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:07.996447   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:07.996472   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:07.996480   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:07.996485   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:08.000400   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:08.001102   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:08.496398   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:08.496428   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:08.496436   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:08.496440   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:08.499609   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:08.996547   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:08.996584   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:08.996595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:08.996600   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:09.000044   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:09.495922   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:09.495945   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:09.495953   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:09.495957   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:09.500237   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:09.996168   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:09.996191   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:09.996199   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:09.996202   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.000717   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:10.001176   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:10.496022   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:10.496057   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.496065   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.496068   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.500059   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.500678   33104 node_ready.go:49] node "ha-748477-m02" has status "Ready":"True"
	I0927 17:43:10.500698   33104 node_ready.go:38] duration metric: took 17.504959286s for node "ha-748477-m02" to be "Ready" ...
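
The GET loop above is the node_ready wait: minikube re-reads /api/v1/nodes/ha-748477-m02 roughly every 500ms until the node's NodeReady condition reports True (about 17.5s in this run). A minimal client-go sketch of the same polling pattern follows; the kubeconfig path is a placeholder, not taken from this run.

// Sketch only: poll the API server until the named node reports Ready,
// mirroring the ~500ms GET loop in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Hypothetical kubeconfig path; the test uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(context.Background(), cs, "ha-748477-m02"))
}
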
	I0927 17:43:10.500708   33104 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:43:10.500784   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:10.500794   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.500801   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.500807   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.509536   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:43:10.516733   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.516818   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n99lr
	I0927 17:43:10.516827   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.516834   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.516839   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.520256   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.520854   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.520869   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.520876   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.520880   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.523812   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.524358   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.524373   33104 pod_ready.go:82] duration metric: took 7.610815ms for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.524381   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.524430   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qvp2z
	I0927 17:43:10.524439   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.524446   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.524450   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.527923   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.528592   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.528607   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.528614   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.528619   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.531438   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.532103   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.532118   33104 pod_ready.go:82] duration metric: took 7.732114ms for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.532126   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.532176   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477
	I0927 17:43:10.532184   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.532190   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.532194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.534800   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.535485   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.535500   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.535508   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.535514   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.539175   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.539692   33104 pod_ready.go:93] pod "etcd-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.539712   33104 pod_ready.go:82] duration metric: took 7.578916ms for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.539724   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.539792   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m02
	I0927 17:43:10.539803   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.539813   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.539818   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.542127   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.542656   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:10.542672   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.542680   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.542687   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.545034   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.545710   33104 pod_ready.go:93] pod "etcd-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.545724   33104 pod_ready.go:82] duration metric: took 5.993851ms for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.545736   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.697130   33104 request.go:632] Waited for 151.318503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:43:10.697197   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:43:10.697205   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.697216   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.697225   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.700810   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.896840   33104 request.go:632] Waited for 195.326418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.896917   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.896923   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.896933   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.896941   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.900668   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.901151   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.901172   33104 pod_ready.go:82] duration metric: took 355.430016ms for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
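
The request.go:632 "Waited for ... due to client-side throttling" entries come from client-go's client-side rate limiter: the rest.Config dumped earlier has QPS:0 and Burst:0, so the client falls back to its defaults (roughly 5 QPS with a burst of 10) and spaces out the back-to-back pod and node GETs. A hedged sketch of how a caller could raise those limits; the values are illustrative, not what minikube configures.

// Sketch only: raise client-go's client-side rate limits to avoid the
// request.go:632 throttling waits seen above.
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// QPS=0/Burst=0 means "use the defaults" (about 5 QPS, burst 10).
	// Illustrative higher limits:
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
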
	I0927 17:43:10.901182   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.096351   33104 request.go:632] Waited for 195.090932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:43:11.096408   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:43:11.096414   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.096422   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.096425   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.099605   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:11.296522   33104 request.go:632] Waited for 196.379972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:11.296583   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:11.296588   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.296595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.296599   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.299521   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:11.299966   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:11.299983   33104 pod_ready.go:82] duration metric: took 398.795354ms for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.299992   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.496407   33104 request.go:632] Waited for 196.359677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:43:11.496465   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:43:11.496470   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.496478   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.496483   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.503613   33104 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0927 17:43:11.696825   33104 request.go:632] Waited for 192.418859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:11.696922   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:11.696934   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.696944   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.696952   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.700522   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:11.701092   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:11.701110   33104 pod_ready.go:82] duration metric: took 401.113109ms for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.701119   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.896057   33104 request.go:632] Waited for 194.879526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:43:11.896120   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:43:11.896126   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.896132   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.896136   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.899805   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.096909   33104 request.go:632] Waited for 196.394213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.096966   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.096971   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.096978   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.096983   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.100042   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.100632   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.100653   33104 pod_ready.go:82] duration metric: took 399.528293ms for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.100663   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.296780   33104 request.go:632] Waited for 196.049394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:43:12.296852   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:43:12.296857   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.296864   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.296868   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.300216   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.497120   33104 request.go:632] Waited for 195.887177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.497190   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.497198   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.497208   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.497214   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.500765   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.501287   33104 pod_ready.go:93] pod "kube-proxy-kxwmh" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.501308   33104 pod_ready.go:82] duration metric: took 400.639485ms for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.501318   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.696369   33104 request.go:632] Waited for 194.968904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:43:12.696426   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:43:12.696431   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.696440   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.696444   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.699706   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.896719   33104 request.go:632] Waited for 196.366182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:12.896803   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:12.896809   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.896816   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.896823   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.900077   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.900632   33104 pod_ready.go:93] pod "kube-proxy-p76v9" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.900654   33104 pod_ready.go:82] duration metric: took 399.328849ms for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.900664   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.096686   33104 request.go:632] Waited for 195.950266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:43:13.096742   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:43:13.096747   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.096754   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.096758   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.099788   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.296662   33104 request.go:632] Waited for 196.364642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:13.296715   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:13.296720   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.296727   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.296730   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.299832   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.300287   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:13.300305   33104 pod_ready.go:82] duration metric: took 399.635674ms for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.300314   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.496503   33104 request.go:632] Waited for 196.090954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:43:13.496579   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:43:13.496587   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.496595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.496602   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.500814   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:13.697121   33104 request.go:632] Waited for 195.399465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:13.697197   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:13.697205   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.697216   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.697223   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.700589   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.701018   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:13.701040   33104 pod_ready.go:82] duration metric: took 400.71901ms for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.701054   33104 pod_ready.go:39] duration metric: took 3.200329427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
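
The pod_ready phase mirrors the node wait: for each pod matching the system-critical labels listed above, minikube reads the pod, then the node it runs on, and treats the pod as ready once its PodReady condition is True. A small client-go sketch of that per-pod check, under assumed names:

// Sketch only: the readiness test behind the pod_ready waits above.
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
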
	I0927 17:43:13.701073   33104 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:43:13.701127   33104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:43:13.716701   33104 api_server.go:72] duration metric: took 21.080586953s to wait for apiserver process to appear ...
	I0927 17:43:13.716724   33104 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:43:13.716745   33104 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0927 17:43:13.721063   33104 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0927 17:43:13.721136   33104 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0927 17:43:13.721142   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.721150   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.721159   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.722231   33104 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 17:43:13.722325   33104 api_server.go:141] control plane version: v1.31.1
	I0927 17:43:13.722340   33104 api_server.go:131] duration metric: took 5.610564ms to wait for apiserver health ...
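
The apiserver check above is two calls: a raw GET of /healthz, which must return the body "ok", and a GET of /version, from which the control-plane version (v1.31.1 here) is read. A sketch of the same probes via client-go's discovery client:

// Sketch only: probe /healthz and /version the way the log above shows.
package example

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	// Raw GET of /healthz; a healthy apiserver answers with the body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	fmt.Printf("healthz: %s\n", body)
	// GET /version for the control-plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("version: %w", err)
	}
	fmt.Println("control plane version:", v.GitVersion)
	return nil
}
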
	I0927 17:43:13.722347   33104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:43:13.896697   33104 request.go:632] Waited for 174.282639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:13.896775   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:13.896782   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.896793   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.896800   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.901747   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:13.907225   33104 system_pods.go:59] 17 kube-system pods found
	I0927 17:43:13.907254   33104 system_pods.go:61] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:43:13.907259   33104 system_pods.go:61] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:43:13.907264   33104 system_pods.go:61] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:43:13.907268   33104 system_pods.go:61] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:43:13.907271   33104 system_pods.go:61] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:43:13.907274   33104 system_pods.go:61] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:43:13.907278   33104 system_pods.go:61] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:43:13.907282   33104 system_pods.go:61] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:43:13.907285   33104 system_pods.go:61] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:43:13.907288   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:43:13.907293   33104 system_pods.go:61] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:43:13.907296   33104 system_pods.go:61] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:43:13.907302   33104 system_pods.go:61] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:43:13.907305   33104 system_pods.go:61] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:43:13.907308   33104 system_pods.go:61] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:43:13.907311   33104 system_pods.go:61] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:43:13.907314   33104 system_pods.go:61] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:43:13.907321   33104 system_pods.go:74] duration metric: took 184.96747ms to wait for pod list to return data ...
	I0927 17:43:13.907331   33104 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:43:14.096832   33104 request.go:632] Waited for 189.427057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:43:14.096891   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:43:14.096897   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.096905   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.096909   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.100749   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:14.101009   33104 default_sa.go:45] found service account: "default"
	I0927 17:43:14.101029   33104 default_sa.go:55] duration metric: took 193.692837ms for default service account to be created ...
	I0927 17:43:14.101037   33104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:43:14.296482   33104 request.go:632] Waited for 195.378336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:14.296581   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:14.296592   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.296603   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.296611   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.300663   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:14.305343   33104 system_pods.go:86] 17 kube-system pods found
	I0927 17:43:14.305387   33104 system_pods.go:89] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:43:14.305393   33104 system_pods.go:89] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:43:14.305397   33104 system_pods.go:89] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:43:14.305401   33104 system_pods.go:89] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:43:14.305405   33104 system_pods.go:89] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:43:14.305410   33104 system_pods.go:89] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:43:14.305415   33104 system_pods.go:89] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:43:14.305419   33104 system_pods.go:89] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:43:14.305423   33104 system_pods.go:89] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:43:14.305427   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:43:14.305435   33104 system_pods.go:89] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:43:14.305438   33104 system_pods.go:89] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:43:14.305442   33104 system_pods.go:89] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:43:14.305446   33104 system_pods.go:89] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:43:14.305450   33104 system_pods.go:89] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:43:14.305454   33104 system_pods.go:89] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:43:14.305457   33104 system_pods.go:89] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:43:14.305464   33104 system_pods.go:126] duration metric: took 204.421896ms to wait for k8s-apps to be running ...
	I0927 17:43:14.305470   33104 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:43:14.305515   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:43:14.319602   33104 system_svc.go:56] duration metric: took 14.121235ms WaitForService to wait for kubelet
	I0927 17:43:14.319638   33104 kubeadm.go:582] duration metric: took 21.683524227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:43:14.319663   33104 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:43:14.497069   33104 request.go:632] Waited for 177.328804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0927 17:43:14.497147   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0927 17:43:14.497154   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.497163   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.497168   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.500866   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:14.501573   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:43:14.501596   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:43:14.501610   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:43:14.501614   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:43:14.501620   33104 node_conditions.go:105] duration metric: took 181.9516ms to run NodePressure ...
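
The NodePressure step lists all nodes and reports each one's ephemeral-storage and CPU capacity (17734596Ki and 2 CPUs per node in this run). A sketch of reading those capacity values with client-go; the helper name is assumed:

// Sketch only: list the node capacities the NodePressure check reads above.
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
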
	I0927 17:43:14.501634   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:43:14.501664   33104 start.go:255] writing updated cluster config ...
	I0927 17:43:14.503659   33104 out.go:201] 
	I0927 17:43:14.505222   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:14.505350   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:14.506867   33104 out.go:177] * Starting "ha-748477-m03" control-plane node in "ha-748477" cluster
	I0927 17:43:14.508071   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:43:14.508097   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:43:14.508199   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:43:14.508212   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:43:14.508319   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:14.508514   33104 start.go:360] acquireMachinesLock for ha-748477-m03: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:43:14.508582   33104 start.go:364] duration metric: took 33.744µs to acquireMachinesLock for "ha-748477-m03"
	I0927 17:43:14.508607   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:43:14.508723   33104 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0927 17:43:14.510363   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:43:14.510454   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:14.510494   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:14.525333   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0927 17:43:14.525777   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:14.526245   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:14.526298   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:14.526634   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:14.526863   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:14.527027   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:14.527179   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:43:14.527207   33104 client.go:168] LocalClient.Create starting
	I0927 17:43:14.527244   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:43:14.527283   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:43:14.527300   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:43:14.527373   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:43:14.527399   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:43:14.527413   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:43:14.527437   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:43:14.527447   33104 main.go:141] libmachine: (ha-748477-m03) Calling .PreCreateCheck
	I0927 17:43:14.527643   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:14.528097   33104 main.go:141] libmachine: Creating machine...
	I0927 17:43:14.528113   33104 main.go:141] libmachine: (ha-748477-m03) Calling .Create
	I0927 17:43:14.528262   33104 main.go:141] libmachine: (ha-748477-m03) Creating KVM machine...
	I0927 17:43:14.529473   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found existing default KVM network
	I0927 17:43:14.529581   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found existing private KVM network mk-ha-748477
	I0927 17:43:14.529722   33104 main.go:141] libmachine: (ha-748477-m03) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 ...
	I0927 17:43:14.529748   33104 main.go:141] libmachine: (ha-748477-m03) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:43:14.529795   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.529703   33861 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:43:14.529867   33104 main.go:141] libmachine: (ha-748477-m03) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:43:14.759285   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.759157   33861 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa...
	I0927 17:43:14.801359   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.801230   33861 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/ha-748477-m03.rawdisk...
	I0927 17:43:14.801398   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Writing magic tar header
	I0927 17:43:14.801441   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Writing SSH key tar header
	I0927 17:43:14.801464   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.801363   33861 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 ...
	I0927 17:43:14.801486   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03
	I0927 17:43:14.801542   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 (perms=drwx------)
	I0927 17:43:14.801588   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:43:14.801602   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:43:14.801611   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:43:14.801620   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:43:14.801631   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:43:14.801640   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:43:14.801647   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:43:14.801654   33104 main.go:141] libmachine: (ha-748477-m03) Creating domain...
	I0927 17:43:14.801662   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:43:14.801670   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:43:14.801678   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:43:14.801683   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home
	I0927 17:43:14.801690   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Skipping /home - not owner
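The "Setting executable bit set on ... / Checking permissions on dir / Skipping /home - not owner" lines above walk each parent directory of the machine store and make it traversable. A minimal Go sketch of that idea, assuming only the standard library and stopping at directories we cannot modify (illustrative only, not the driver's own code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // fixPermissions walks from the machine directory up toward "/" and makes
    // sure each directory has the owner-execute bit, so the store path stays
    // traversable. Directories we cannot chmod are skipped, like "/home" above.
    func fixPermissions(path string) error {
        for dir := path; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            mode := info.Mode().Perm()
            if mode&0o100 == 0 {
                if err := os.Chmod(dir, mode|0o100); err != nil {
                    fmt.Println("skipping", dir, ":", err)
                    return nil
                }
            }
        }
        return nil
    }

    func main() {
        fmt.Println(fixPermissions(os.TempDir()))
    }
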
	I0927 17:43:14.802911   33104 main.go:141] libmachine: (ha-748477-m03) define libvirt domain using xml: 
	I0927 17:43:14.802928   33104 main.go:141] libmachine: (ha-748477-m03) <domain type='kvm'>
	I0927 17:43:14.802938   33104 main.go:141] libmachine: (ha-748477-m03)   <name>ha-748477-m03</name>
	I0927 17:43:14.802946   33104 main.go:141] libmachine: (ha-748477-m03)   <memory unit='MiB'>2200</memory>
	I0927 17:43:14.802953   33104 main.go:141] libmachine: (ha-748477-m03)   <vcpu>2</vcpu>
	I0927 17:43:14.802962   33104 main.go:141] libmachine: (ha-748477-m03)   <features>
	I0927 17:43:14.802968   33104 main.go:141] libmachine: (ha-748477-m03)     <acpi/>
	I0927 17:43:14.802975   33104 main.go:141] libmachine: (ha-748477-m03)     <apic/>
	I0927 17:43:14.802985   33104 main.go:141] libmachine: (ha-748477-m03)     <pae/>
	I0927 17:43:14.802993   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803022   33104 main.go:141] libmachine: (ha-748477-m03)   </features>
	I0927 17:43:14.803039   33104 main.go:141] libmachine: (ha-748477-m03)   <cpu mode='host-passthrough'>
	I0927 17:43:14.803047   33104 main.go:141] libmachine: (ha-748477-m03)   
	I0927 17:43:14.803056   33104 main.go:141] libmachine: (ha-748477-m03)   </cpu>
	I0927 17:43:14.803062   33104 main.go:141] libmachine: (ha-748477-m03)   <os>
	I0927 17:43:14.803067   33104 main.go:141] libmachine: (ha-748477-m03)     <type>hvm</type>
	I0927 17:43:14.803073   33104 main.go:141] libmachine: (ha-748477-m03)     <boot dev='cdrom'/>
	I0927 17:43:14.803077   33104 main.go:141] libmachine: (ha-748477-m03)     <boot dev='hd'/>
	I0927 17:43:14.803084   33104 main.go:141] libmachine: (ha-748477-m03)     <bootmenu enable='no'/>
	I0927 17:43:14.803090   33104 main.go:141] libmachine: (ha-748477-m03)   </os>
	I0927 17:43:14.803095   33104 main.go:141] libmachine: (ha-748477-m03)   <devices>
	I0927 17:43:14.803102   33104 main.go:141] libmachine: (ha-748477-m03)     <disk type='file' device='cdrom'>
	I0927 17:43:14.803110   33104 main.go:141] libmachine: (ha-748477-m03)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/boot2docker.iso'/>
	I0927 17:43:14.803116   33104 main.go:141] libmachine: (ha-748477-m03)       <target dev='hdc' bus='scsi'/>
	I0927 17:43:14.803122   33104 main.go:141] libmachine: (ha-748477-m03)       <readonly/>
	I0927 17:43:14.803131   33104 main.go:141] libmachine: (ha-748477-m03)     </disk>
	I0927 17:43:14.803140   33104 main.go:141] libmachine: (ha-748477-m03)     <disk type='file' device='disk'>
	I0927 17:43:14.803152   33104 main.go:141] libmachine: (ha-748477-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:43:14.803173   33104 main.go:141] libmachine: (ha-748477-m03)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/ha-748477-m03.rawdisk'/>
	I0927 17:43:14.803187   33104 main.go:141] libmachine: (ha-748477-m03)       <target dev='hda' bus='virtio'/>
	I0927 17:43:14.803204   33104 main.go:141] libmachine: (ha-748477-m03)     </disk>
	I0927 17:43:14.803214   33104 main.go:141] libmachine: (ha-748477-m03)     <interface type='network'>
	I0927 17:43:14.803232   33104 main.go:141] libmachine: (ha-748477-m03)       <source network='mk-ha-748477'/>
	I0927 17:43:14.803250   33104 main.go:141] libmachine: (ha-748477-m03)       <model type='virtio'/>
	I0927 17:43:14.803301   33104 main.go:141] libmachine: (ha-748477-m03)     </interface>
	I0927 17:43:14.803324   33104 main.go:141] libmachine: (ha-748477-m03)     <interface type='network'>
	I0927 17:43:14.803338   33104 main.go:141] libmachine: (ha-748477-m03)       <source network='default'/>
	I0927 17:43:14.803347   33104 main.go:141] libmachine: (ha-748477-m03)       <model type='virtio'/>
	I0927 17:43:14.803356   33104 main.go:141] libmachine: (ha-748477-m03)     </interface>
	I0927 17:43:14.803366   33104 main.go:141] libmachine: (ha-748477-m03)     <serial type='pty'>
	I0927 17:43:14.803374   33104 main.go:141] libmachine: (ha-748477-m03)       <target port='0'/>
	I0927 17:43:14.803386   33104 main.go:141] libmachine: (ha-748477-m03)     </serial>
	I0927 17:43:14.803397   33104 main.go:141] libmachine: (ha-748477-m03)     <console type='pty'>
	I0927 17:43:14.803409   33104 main.go:141] libmachine: (ha-748477-m03)       <target type='serial' port='0'/>
	I0927 17:43:14.803420   33104 main.go:141] libmachine: (ha-748477-m03)     </console>
	I0927 17:43:14.803429   33104 main.go:141] libmachine: (ha-748477-m03)     <rng model='virtio'>
	I0927 17:43:14.803439   33104 main.go:141] libmachine: (ha-748477-m03)       <backend model='random'>/dev/random</backend>
	I0927 17:43:14.803448   33104 main.go:141] libmachine: (ha-748477-m03)     </rng>
	I0927 17:43:14.803456   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803464   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803470   33104 main.go:141] libmachine: (ha-748477-m03)   </devices>
	I0927 17:43:14.803478   33104 main.go:141] libmachine: (ha-748477-m03) </domain>
	I0927 17:43:14.803488   33104 main.go:141] libmachine: (ha-748477-m03) 
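The block above is the libvirt domain XML the kvm2 driver defines for ha-748477-m03 (boot from the boot2docker.iso cdrom, a raw virtio disk, and two virtio NICs on the mk-ha-748477 and default networks). The driver talks to libvirt directly; as a rough, hedged sketch only, the same definition could be applied by shelling out to virsh, assuming virsh is installed and the XML has been written to a hypothetical file path:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Hypothetical path; in the log the XML is built in memory by the driver.
        xmlPath := "/tmp/ha-748477-m03.xml"
        if _, err := os.Stat(xmlPath); err != nil {
            fmt.Fprintln(os.Stderr, "domain XML not found:", err)
            os.Exit(1)
        }
        // "virsh define" registers the domain; "virsh start" boots it.
        for _, args := range [][]string{
            {"define", xmlPath},
            {"start", "ha-748477-m03"},
        } {
            out, err := exec.Command("virsh", args...).CombinedOutput()
            if err != nil {
                fmt.Fprintf(os.Stderr, "virsh %v failed: %v\n%s", args, err, out)
                os.Exit(1)
            }
        }
        fmt.Println("domain defined and started")
    }
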
	I0927 17:43:14.809886   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:46:4f:8f in network default
	I0927 17:43:14.810424   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring networks are active...
	I0927 17:43:14.810447   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:14.811161   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring network default is active
	I0927 17:43:14.811552   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring network mk-ha-748477 is active
	I0927 17:43:14.811864   33104 main.go:141] libmachine: (ha-748477-m03) Getting domain xml...
	I0927 17:43:14.812640   33104 main.go:141] libmachine: (ha-748477-m03) Creating domain...
	I0927 17:43:16.061728   33104 main.go:141] libmachine: (ha-748477-m03) Waiting to get IP...
	I0927 17:43:16.062561   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.063038   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.063058   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.062985   33861 retry.go:31] will retry after 274.225477ms: waiting for machine to come up
	I0927 17:43:16.338624   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.339183   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.339208   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.339134   33861 retry.go:31] will retry after 249.930567ms: waiting for machine to come up
	I0927 17:43:16.590699   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.591137   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.591158   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.591098   33861 retry.go:31] will retry after 427.975523ms: waiting for machine to come up
	I0927 17:43:17.021029   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.021704   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.021792   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.021629   33861 retry.go:31] will retry after 377.570175ms: waiting for machine to come up
	I0927 17:43:17.401315   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.401764   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.401789   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.401730   33861 retry.go:31] will retry after 480.401499ms: waiting for machine to come up
	I0927 17:43:17.883333   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.883876   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.883904   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.883818   33861 retry.go:31] will retry after 806.335644ms: waiting for machine to come up
	I0927 17:43:18.691641   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:18.692132   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:18.692163   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:18.692063   33861 retry.go:31] will retry after 996.155949ms: waiting for machine to come up
	I0927 17:43:19.690169   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:19.690576   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:19.690600   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:19.690536   33861 retry.go:31] will retry after 1.280499747s: waiting for machine to come up
	I0927 17:43:20.972507   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:20.972924   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:20.972949   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:20.972873   33861 retry.go:31] will retry after 1.740341439s: waiting for machine to come up
	I0927 17:43:22.715948   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:22.716453   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:22.716480   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:22.716399   33861 retry.go:31] will retry after 2.220570146s: waiting for machine to come up
	I0927 17:43:24.939094   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:24.939777   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:24.939807   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:24.939729   33861 retry.go:31] will retry after 1.898000228s: waiting for machine to come up
	I0927 17:43:26.839799   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:26.840424   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:26.840450   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:26.840370   33861 retry.go:31] will retry after 3.204742412s: waiting for machine to come up
	I0927 17:43:30.046789   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:30.047236   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:30.047261   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:30.047187   33861 retry.go:31] will retry after 3.849840599s: waiting for machine to come up
	I0927 17:43:33.899866   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:33.900417   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:33.900443   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:33.900384   33861 retry.go:31] will retry after 4.029402489s: waiting for machine to come up
	I0927 17:43:37.931866   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:37.932267   33104 main.go:141] libmachine: (ha-748477-m03) Found IP for machine: 192.168.39.225
	I0927 17:43:37.932289   33104 main.go:141] libmachine: (ha-748477-m03) Reserving static IP address...
	I0927 17:43:37.932301   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has current primary IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:37.932706   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find host DHCP lease matching {name: "ha-748477-m03", mac: "52:54:00:bf:59:33", ip: "192.168.39.225"} in network mk-ha-748477
	I0927 17:43:38.014671   33104 main.go:141] libmachine: (ha-748477-m03) Reserved static IP address: 192.168.39.225
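The retry lines above ("will retry after 274.225477ms ... 4.029402489s") poll the DHCP leases of mk-ha-748477 with a randomized, growing delay until the guest gets an address. A small Go sketch of that pattern, where the lookup function and the exact backoff cadence are stand-ins, not the values retry.go actually uses:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup with jittered, growing delays until it returns an
    // address or the deadline passes.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            // Jitter the delay, roughly like the ~0.25s..4s waits in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for IP")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.225", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }
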
	I0927 17:43:38.014703   33104 main.go:141] libmachine: (ha-748477-m03) Waiting for SSH to be available...
	I0927 17:43:38.014712   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Getting to WaitForSSH function...
	I0927 17:43:38.017503   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.018016   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.018038   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.018293   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using SSH client type: external
	I0927 17:43:38.018324   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa (-rw-------)
	I0927 17:43:38.018358   33104 main.go:141] libmachine: (ha-748477-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:43:38.018375   33104 main.go:141] libmachine: (ha-748477-m03) DBG | About to run SSH command:
	I0927 17:43:38.018391   33104 main.go:141] libmachine: (ha-748477-m03) DBG | exit 0
	I0927 17:43:38.146846   33104 main.go:141] libmachine: (ha-748477-m03) DBG | SSH cmd err, output: <nil>: 
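The WaitForSSH step above probes the guest by running "exit 0" through an external ssh client with the options listed in the log. A compact Go sketch of the same probe, using a subset of those options; the host, user and key path are taken from the log, while the retry count and sleep are assumptions for illustration:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH succeeds once the guest accepts a key-based SSH connection
    // and runs "exit 0".
    func waitForSSH(user, host, key string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            fmt.Sprintf("%s@%s", user, host),
            "exit", "0",
        }
        for i := 0; i < 30; i++ {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not available", host)
    }

    func main() {
        err := waitForSSH("docker", "192.168.39.225",
            "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa")
        fmt.Println("reachable:", err == nil)
    }
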
	I0927 17:43:38.147182   33104 main.go:141] libmachine: (ha-748477-m03) KVM machine creation complete!
	I0927 17:43:38.147465   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:38.148028   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:38.148248   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:38.148515   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:43:38.148529   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetState
	I0927 17:43:38.150026   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:43:38.150038   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:43:38.150043   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:43:38.150053   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.152279   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.152703   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.152731   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.152930   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.153090   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.153241   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.153385   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.153555   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.153754   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.153768   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:43:38.265876   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:43:38.265897   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:43:38.265904   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.268621   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.269047   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.269076   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.269294   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.269526   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.269745   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.269874   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.270033   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.270230   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.270243   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:43:38.383161   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:43:38.383229   33104 main.go:141] libmachine: found compatible host: buildroot
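Provisioner detection above runs "cat /etc/os-release" and matches on the ID field ("buildroot"). A small Go sketch of that parsing step, fed with the exact output captured in the log; this is an illustration of the check, not the libmachine implementation:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner extracts ID and VERSION_ID from os-release output.
    func detectProvisioner(osRelease string) (id, version string) {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            v = strings.Trim(v, `"`)
            switch k {
            case "ID":
                id = v
            case "VERSION_ID":
                version = v
            }
        }
        return id, version
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        id, ver := detectProvisioner(sample)
        fmt.Println(id, ver) // buildroot 2023.02.9
    }
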
	I0927 17:43:38.383244   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:43:38.383259   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.383511   33104 buildroot.go:166] provisioning hostname "ha-748477-m03"
	I0927 17:43:38.383534   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.383702   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.386560   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.386936   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.386960   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.387130   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.387316   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.387515   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.387694   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.387876   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.388053   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.388066   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477-m03 && echo "ha-748477-m03" | sudo tee /etc/hostname
	I0927 17:43:38.517221   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477-m03
	
	I0927 17:43:38.517257   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.520130   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.520637   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.520668   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.520845   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.521018   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.521146   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.521319   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.521531   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.521692   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.521708   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:43:38.647377   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
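Hostname provisioning above is two SSH commands: write /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for the new name. As a sketch, the Go snippet below only reproduces the command strings shown in the log for a given node name; running them over SSH is left out:

    package main

    import "fmt"

    // hostnameCommands formats the two shell commands used to set the guest
    // hostname and patch /etc/hosts.
    func hostnameCommands(name string) (setHostname, fixHosts string) {
        setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
        fixHosts = fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, name)
        return setHostname, fixHosts
    }

    func main() {
        a, b := hostnameCommands("ha-748477-m03")
        fmt.Println(a)
        fmt.Println(b)
    }
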
	I0927 17:43:38.647402   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:43:38.647415   33104 buildroot.go:174] setting up certificates
	I0927 17:43:38.647425   33104 provision.go:84] configureAuth start
	I0927 17:43:38.647433   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.647695   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:38.650891   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.651352   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.651376   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.651507   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.653842   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.654158   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.654175   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.654290   33104 provision.go:143] copyHostCerts
	I0927 17:43:38.654319   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:43:38.654364   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:43:38.654376   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:43:38.654459   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:43:38.654546   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:43:38.654572   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:43:38.654581   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:43:38.654616   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:43:38.654702   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:43:38.654726   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:43:38.654735   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:43:38.654768   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:43:38.654847   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477-m03 san=[127.0.0.1 192.168.39.225 ha-748477-m03 localhost minikube]
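The "generating server cert ... san=[127.0.0.1 192.168.39.225 ha-748477-m03 localhost minikube]" step amounts to signing a server certificate with the cluster CA, embedding those IP and DNS SANs. A compact, hedged Go sketch of that operation using crypto/x509; the self-signed CA generated here is a stand-in for minikube's ca.pem/ca-key.pem, the validity period is an assumption, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Stand-in CA (errors ignored for brevity in this sketch).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs from the log, signed by the CA.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-748477-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
            DNSNames:     []string{"ha-748477-m03", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
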
	I0927 17:43:38.750947   33104 provision.go:177] copyRemoteCerts
	I0927 17:43:38.751001   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:43:38.751023   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.753961   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.754344   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.754372   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.754619   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.754798   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.754987   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.755087   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:38.840538   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:43:38.840622   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:43:38.865467   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:43:38.865545   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:43:38.889287   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:43:38.889354   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 17:43:38.913853   33104 provision.go:87] duration metric: took 266.415768ms to configureAuth
	I0927 17:43:38.913886   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:43:38.914119   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:38.914188   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.916953   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.917343   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.917389   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.917634   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.917835   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.918007   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.918197   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.918414   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.918567   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.918582   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:43:39.149801   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:43:39.149830   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:43:39.149841   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetURL
	I0927 17:43:39.151338   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using libvirt version 6000000
	I0927 17:43:39.154047   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.154538   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.154584   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.154757   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:43:39.154780   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:43:39.154790   33104 client.go:171] duration metric: took 24.627572253s to LocalClient.Create
	I0927 17:43:39.154853   33104 start.go:167] duration metric: took 24.627635604s to libmachine.API.Create "ha-748477"
	I0927 17:43:39.154866   33104 start.go:293] postStartSetup for "ha-748477-m03" (driver="kvm2")
	I0927 17:43:39.154874   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:43:39.154890   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.155121   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:43:39.155148   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.157417   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.157783   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.157810   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.157968   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.158151   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.158328   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.158514   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.245650   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:43:39.250017   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:43:39.250039   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:43:39.250125   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:43:39.250232   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:43:39.250246   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:43:39.250349   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:43:39.261588   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:43:39.287333   33104 start.go:296] duration metric: took 132.452339ms for postStartSetup
	I0927 17:43:39.287401   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:39.288010   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:39.291082   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.291501   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.291531   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.291849   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:39.292090   33104 start.go:128] duration metric: took 24.783356022s to createHost
	I0927 17:43:39.292116   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.294390   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.294793   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.294820   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.294965   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.295132   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.295273   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.295377   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.295501   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:39.295656   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:39.295666   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:43:39.411619   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727459019.389020724
	
	I0927 17:43:39.411648   33104 fix.go:216] guest clock: 1727459019.389020724
	I0927 17:43:39.411657   33104 fix.go:229] Guest: 2024-09-27 17:43:39.389020724 +0000 UTC Remote: 2024-09-27 17:43:39.292103608 +0000 UTC m=+148.045500714 (delta=96.917116ms)
	I0927 17:43:39.411678   33104 fix.go:200] guest clock delta is within tolerance: 96.917116ms
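The clock check above runs "date +%s.%N" on the guest and compares the result with the host clock (here a 96.9ms delta, within tolerance). A short Go sketch of that comparison, using the timestamp from the log; the tolerance value is an assumption for illustration, not minikube's configured threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's "date +%s.%N" output and returns how far
    // the host clock is ahead of (or behind) the guest.
    func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return hostNow.Sub(guest), nil
    }

    func main() {
        delta, err := clockDelta("1727459019.389020724", time.Now())
        if err != nil {
            panic(err)
        }
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
    }
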
	I0927 17:43:39.411685   33104 start.go:83] releasing machines lock for "ha-748477-m03", held for 24.903091459s
	I0927 17:43:39.411706   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.411995   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:39.415530   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.415971   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.416001   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.418411   33104 out.go:177] * Found network options:
	I0927 17:43:39.419695   33104 out.go:177]   - NO_PROXY=192.168.39.217,192.168.39.58
	W0927 17:43:39.421098   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 17:43:39.421127   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:43:39.421146   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.421784   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.421985   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.422065   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:43:39.422102   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	W0927 17:43:39.422186   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 17:43:39.422213   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:43:39.422273   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:43:39.422290   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.425046   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425070   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425405   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.425433   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425459   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.425473   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425650   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.425656   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.425841   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.425845   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.425989   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.426058   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.426122   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.426163   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.669795   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:43:39.677634   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:43:39.677716   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:43:39.695349   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:43:39.695382   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:43:39.695446   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:43:39.715092   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:43:39.728101   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:43:39.728166   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:43:39.743124   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:43:39.759724   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:43:39.876420   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:43:40.024261   33104 docker.go:233] disabling docker service ...
	I0927 17:43:40.024330   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:43:40.038245   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:43:40.051565   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:43:40.182718   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:43:40.288143   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:43:40.301741   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:43:40.319929   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:43:40.319996   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.330123   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:43:40.330196   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.340177   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.350053   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.359649   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:43:40.370207   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.380395   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.396915   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
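The sed edits above point CRI-O's drop-in at the registry.k8s.io/pause:3.10 pause image, switch the cgroup manager to cgroupfs with conmon in the "pod" cgroup, and whitelist unprivileged port 0 via default_sysctls. A minimal spot check of the resulting drop-in (a sketch, not part of the logged run; assumes the edits applied cleanly):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",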
	I0927 17:43:40.407460   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:43:40.418005   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:43:40.418063   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:43:40.432276   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:43:40.441789   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:43:40.568411   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:43:40.662140   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:43:40.662238   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:43:40.666515   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:43:40.666579   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:43:40.670183   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:43:40.717483   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:43:40.717566   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:43:40.748394   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:43:40.780693   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:43:40.782171   33104 out.go:177]   - env NO_PROXY=192.168.39.217
	I0927 17:43:40.783616   33104 out.go:177]   - env NO_PROXY=192.168.39.217,192.168.39.58
	I0927 17:43:40.784733   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:40.787731   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:40.788217   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:40.788253   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:40.788539   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:43:40.792731   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:43:40.806447   33104 mustload.go:65] Loading cluster: ha-748477
	I0927 17:43:40.806781   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:40.807166   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:40.807212   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:40.822513   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0927 17:43:40.823010   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:40.823465   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:40.823485   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:40.823815   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:40.824022   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:43:40.825639   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:43:40.826053   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:40.826124   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:40.841477   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I0927 17:43:40.841930   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:40.842426   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:40.842447   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:40.842805   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:40.843010   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:43:40.843186   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.225
	I0927 17:43:40.843200   33104 certs.go:194] generating shared ca certs ...
	I0927 17:43:40.843218   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:40.843371   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:43:40.843411   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:43:40.843417   33104 certs.go:256] generating profile certs ...
	I0927 17:43:40.843480   33104 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:43:40.843503   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9
	I0927 17:43:40.843516   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.225 192.168.39.254]
	I0927 17:43:41.042816   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 ...
	I0927 17:43:41.042845   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9: {Name:mkb90c985fb1d25421e8db77e70e31dc9e70f7a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:41.043004   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9 ...
	I0927 17:43:41.043015   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9: {Name:mk8a7a00dfda8086d770b62e0a97735d5734e23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:41.043080   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:43:41.043215   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:43:41.043337   33104 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
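The regenerated apiserver serving certificate is signed for the cluster service IPs, loopback, the three control-plane node addresses and the kube-vip VIP listed above. One way to confirm the SANs baked into the generated file on the Jenkins host (a sketch, not part of the logged run):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'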
	I0927 17:43:41.043351   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:43:41.043364   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:43:41.043379   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:43:41.043391   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:43:41.043404   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:43:41.043417   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:43:41.043428   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:43:41.066805   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:43:41.066895   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:43:41.066928   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:43:41.066939   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:43:41.066959   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:43:41.066982   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:43:41.067004   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:43:41.067043   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:43:41.067080   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.067101   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.067118   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.067151   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:43:41.070167   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:41.070759   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:43:41.070790   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:41.071003   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:43:41.071223   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:43:41.071385   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:43:41.071558   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:43:41.147059   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 17:43:41.152408   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 17:43:41.164540   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 17:43:41.168851   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 17:43:41.179537   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 17:43:41.183316   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 17:43:41.193077   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 17:43:41.197075   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0927 17:43:41.207804   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 17:43:41.211696   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 17:43:41.221742   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 17:43:41.225610   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 17:43:41.235977   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:43:41.260849   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:43:41.285062   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:43:41.309713   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:43:41.332498   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0927 17:43:41.356394   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 17:43:41.380266   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:43:41.404334   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:43:41.432122   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:43:41.455867   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:43:41.479143   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:43:41.501633   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 17:43:41.518790   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 17:43:41.534928   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 17:43:41.551854   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0927 17:43:41.568140   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 17:43:41.584545   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 17:43:41.600656   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 17:43:41.616675   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:43:41.622211   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:43:41.632889   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.637255   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.637327   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.642842   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:43:41.653070   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:43:41.663785   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.668204   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.668272   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.673573   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:43:41.686375   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:43:41.697269   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.702234   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.702308   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.707933   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
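The three blocks above all follow the same CA-trust pattern on the node: copy the PEM into /usr/share/ca-certificates, hash it with openssl, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hash-based lookup finds the cert (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the two test certs). A sketch of that pattern for one hypothetical bundle entry:

	CERT=/usr/share/ca-certificates/example.pem        # hypothetical entry, not from the log
	HASH=$(openssl x509 -hash -noout -in "$CERT")      # e.g. b5213941 for minikubeCA above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"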
	I0927 17:43:41.719033   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:43:41.723054   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:43:41.723112   33104 kubeadm.go:934] updating node {m03 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0927 17:43:41.723208   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:43:41.723244   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:43:41.723291   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:43:41.741075   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:43:41.741157   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
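The manifest above runs kube-vip as a static pod that claims the control-plane VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443. Once kubelet starts the pod on the current leader, the VIP should be visible on the interface; a quick check from inside a control-plane node (a sketch, not part of the logged run):

	ip addr show eth0 | grep 192.168.39.254
	# from outside the node the VIP should also answer on the API port, e.g.
	# curl -k https://192.168.39.254:8443/version   (assuming anonymous access to /version is enabled)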
	I0927 17:43:41.741232   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:43:41.751232   33104 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 17:43:41.751324   33104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 17:43:41.760899   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 17:43:41.760908   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 17:43:41.760931   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:43:41.760912   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 17:43:41.760955   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:43:41.760999   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:43:41.761007   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:43:41.761019   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:43:41.775995   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:43:41.776050   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 17:43:41.776070   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 17:43:41.776102   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:43:41.776118   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 17:43:41.776149   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 17:43:41.807089   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 17:43:41.807127   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
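kubeadm, kubectl and kubelet are streamed from the local cache into /var/lib/minikube/binaries/v1.31.1 on the new node; the download URLs above each reference a matching .sha256 file. A hypothetical way to re-verify a cached binary against that checksum (not part of the logged run; assumes the dl.k8s.io .sha256 file contains only the hex digest):

	cd /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1
	curl -sLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check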
	I0927 17:43:42.630057   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 17:43:42.639770   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 17:43:42.656295   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:43:42.672793   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:43:42.690976   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:43:42.694501   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:43:42.706939   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:43:42.822795   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:43:42.839249   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:43:42.839706   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:42.839761   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:42.856985   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0927 17:43:42.857497   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:42.858071   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:42.858097   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:42.858483   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:42.858728   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:43:42.858882   33104 start.go:317] joinCluster: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:43:42.858996   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 17:43:42.859017   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:43:42.862454   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:42.862936   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:43:42.862961   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:42.863106   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:43:42.863242   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:43:42.863373   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:43:42.863511   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:43:43.018533   33104 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:43:43.018576   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gg5wlb.ttkule5dhfsmakjt --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m03 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I0927 17:44:05.879368   33104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gg5wlb.ttkule5dhfsmakjt --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m03 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (22.860766617s)
	I0927 17:44:05.879405   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 17:44:06.450456   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477-m03 minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=false
	I0927 17:44:06.570812   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-748477-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 17:44:06.695756   33104 start.go:319] duration metric: took 23.836880106s to joinCluster
	I0927 17:44:06.695831   33104 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:44:06.696168   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:44:06.698664   33104 out.go:177] * Verifying Kubernetes components...
	I0927 17:44:06.700038   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:44:06.966281   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:44:06.988180   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:44:06.988494   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 17:44:06.988564   33104 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0927 17:44:06.988753   33104 node_ready.go:35] waiting up to 6m0s for node "ha-748477-m03" to be "Ready" ...
	I0927 17:44:06.988830   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:06.988838   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:06.988846   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:06.988849   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:06.992308   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:07.488982   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:07.489008   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:07.489020   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:07.489027   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:07.492583   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:07.988968   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:07.988994   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:07.989004   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:07.989011   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:07.993492   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:08.489684   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:08.489716   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:08.489726   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:08.489733   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:08.492856   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:08.989902   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:08.989923   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:08.989931   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:08.989937   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:08.994357   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:08.995455   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:09.489815   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:09.489842   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:09.489854   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:09.489860   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:09.493739   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:09.989180   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:09.989203   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:09.989211   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:09.989215   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:09.993543   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:10.489209   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:10.489234   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:10.489246   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:10.489253   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:10.492922   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:10.989208   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:10.989240   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:10.989251   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:10.989256   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:10.992477   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:11.489265   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:11.489287   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:11.489296   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:11.489304   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:11.492474   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:11.492926   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:11.989355   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:11.989380   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:11.989390   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:11.989394   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:11.992835   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:12.489471   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:12.489492   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:12.489500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:12.489504   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:12.493061   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:12.989541   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:12.989567   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:12.989575   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:12.989579   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:12.992728   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:13.489760   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:13.489793   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:13.489806   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:13.489812   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:13.497872   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:44:13.498431   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:13.989853   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:13.989880   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:13.989891   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:13.989897   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:13.993174   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:14.489807   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:14.489829   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:14.489837   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:14.489841   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:14.492717   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:14.989051   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:14.989078   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:14.989086   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:14.989090   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:14.992500   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.489879   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:15.489902   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:15.489912   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:15.489917   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:15.493620   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.989863   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:15.989886   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:15.989894   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:15.989898   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:15.993642   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.994205   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:16.489216   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:16.489238   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:16.489246   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:16.489251   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:16.492886   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:16.989910   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:16.989931   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:16.989940   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:16.989945   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:16.993350   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:17.489239   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:17.489263   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:17.489272   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:17.489276   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:17.492577   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:17.989223   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:17.989270   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:17.989278   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:17.989284   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:17.992505   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:18.489403   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:18.489430   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:18.489443   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:18.489449   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:18.492511   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:18.493206   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:18.989479   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:18.989510   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:18.989519   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:18.989524   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:18.992918   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:19.489608   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:19.489633   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:19.489641   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:19.489646   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:19.493022   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:19.989818   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:19.989842   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:19.989850   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:19.989853   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:19.993975   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:20.489504   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:20.489533   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:20.489542   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:20.489546   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:20.492731   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:20.493288   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:20.988966   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:20.988991   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:20.989000   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:20.989003   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:20.992757   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:21.489625   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:21.489646   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:21.489657   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:21.489662   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:21.493197   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:21.988951   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:21.988974   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:21.988982   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:21.988986   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:21.992564   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:22.489223   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:22.489254   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:22.489262   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:22.489270   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:22.492275   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:22.989460   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:22.989483   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:22.989493   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:22.989502   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:22.992826   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:22.993315   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:23.489736   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:23.489756   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.489764   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.489768   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.495068   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:23.989320   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:23.989345   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.989356   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.989363   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.992950   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:23.993381   33104 node_ready.go:49] node "ha-748477-m03" has status "Ready":"True"
	I0927 17:44:23.993400   33104 node_ready.go:38] duration metric: took 17.004633158s for node "ha-748477-m03" to be "Ready" ...
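The round_trippers lines above are minikube polling GET /api/v1/nodes/ha-748477-m03 roughly every 500ms until the node reports Ready, which took about 17s after the join. An equivalent manual check with kubectl (a sketch; the context name assumes minikube's default of matching the profile name):

	kubectl --context ha-748477 get node ha-748477-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints True once the kubelet and CNI on m03 are up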
	I0927 17:44:23.993411   33104 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:44:23.993477   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:23.993489   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.993500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.993509   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.999279   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:24.006063   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.006162   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n99lr
	I0927 17:44:24.006171   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.006185   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.006194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.009676   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.010413   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.010431   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.010440   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.010444   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.013067   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.013609   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.013634   33104 pod_ready.go:82] duration metric: took 7.540949ms for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.013648   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.013707   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qvp2z
	I0927 17:44:24.013715   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.013723   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.013734   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.016476   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.017040   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.017054   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.017061   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.017064   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.019465   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.020063   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.020102   33104 pod_ready.go:82] duration metric: took 6.431397ms for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.020111   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.020159   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477
	I0927 17:44:24.020167   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.020173   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.020177   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.022709   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.023386   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.023403   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.023413   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.023418   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.025863   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.026254   33104 pod_ready.go:93] pod "etcd-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.026275   33104 pod_ready.go:82] duration metric: took 6.154043ms for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.026285   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.026339   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m02
	I0927 17:44:24.026349   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.026358   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.026367   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.028864   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.029549   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:24.029570   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.029581   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.029587   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.032020   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.032371   33104 pod_ready.go:93] pod "etcd-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.032386   33104 pod_ready.go:82] duration metric: took 6.091988ms for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.032394   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.189823   33104 request.go:632] Waited for 157.37468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m03
	I0927 17:44:24.189892   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m03
	I0927 17:44:24.189897   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.189904   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.189908   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.193136   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.390201   33104 request.go:632] Waited for 196.372402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:24.390286   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:24.390297   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.390308   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.390313   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.393762   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.394363   33104 pod_ready.go:93] pod "etcd-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.394381   33104 pod_ready.go:82] duration metric: took 361.981746ms for pod "etcd-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.394396   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.589922   33104 request.go:632] Waited for 195.447053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:44:24.589977   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:44:24.589984   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.589994   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.590003   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.595149   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:24.790340   33104 request.go:632] Waited for 194.372172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.790393   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.790398   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.790405   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.790410   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.794157   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.794854   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.794872   33104 pod_ready.go:82] duration metric: took 400.469945ms for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.794884   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.990005   33104 request.go:632] Waited for 195.038611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:44:24.990097   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:44:24.990106   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.990114   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.990120   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.993651   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.189611   33104 request.go:632] Waited for 195.314442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:25.189675   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:25.189682   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.189692   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.189702   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.192900   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.193483   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.193499   33104 pod_ready.go:82] duration metric: took 398.608065ms for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.193510   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.389697   33104 request.go:632] Waited for 196.11571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m03
	I0927 17:44:25.389767   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m03
	I0927 17:44:25.389774   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.389785   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.389793   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.393037   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.590215   33104 request.go:632] Waited for 196.404084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:25.590294   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:25.590304   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.590312   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.590316   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.593767   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.594384   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.594405   33104 pod_ready.go:82] duration metric: took 400.885974ms for pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.594417   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.789682   33104 request.go:632] Waited for 195.173744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:44:25.789750   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:44:25.789763   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.789771   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.789780   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.793195   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.990184   33104 request.go:632] Waited for 196.372393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:25.990247   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:25.990253   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.990260   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.990263   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.993519   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.994033   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.994056   33104 pod_ready.go:82] duration metric: took 399.631199ms for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.994070   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.190045   33104 request.go:632] Waited for 195.907906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:44:26.190131   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:44:26.190138   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.190151   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.190160   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.193660   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.389361   33104 request.go:632] Waited for 195.017885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:26.389417   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:26.389421   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.389428   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.389431   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.392564   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.393105   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:26.393124   33104 pod_ready.go:82] duration metric: took 399.046825ms for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.393133   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.589483   33104 request.go:632] Waited for 196.270592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m03
	I0927 17:44:26.589536   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m03
	I0927 17:44:26.589540   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.589548   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.589552   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.592906   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.789895   33104 request.go:632] Waited for 196.382825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:26.789947   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:26.789952   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.789961   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.789964   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.793463   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.793873   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:26.793891   33104 pod_ready.go:82] duration metric: took 400.752393ms for pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.793901   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.989945   33104 request.go:632] Waited for 195.982437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:44:26.990000   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:44:26.990005   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.990031   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.990035   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.993238   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.190379   33104 request.go:632] Waited for 196.39365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:27.190481   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:27.190488   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.190500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.190506   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.194446   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.195047   33104 pod_ready.go:93] pod "kube-proxy-kxwmh" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.195067   33104 pod_ready.go:82] duration metric: took 401.160768ms for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.195076   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.390020   33104 request.go:632] Waited for 194.886629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:44:27.390100   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:44:27.390108   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.390118   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.390144   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.393971   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.590100   33104 request.go:632] Waited for 195.421674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:27.590160   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:27.590166   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.590174   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.590180   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.593717   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.594167   33104 pod_ready.go:93] pod "kube-proxy-p76v9" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.594184   33104 pod_ready.go:82] duration metric: took 399.103012ms for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.594193   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vwkqb" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.790210   33104 request.go:632] Waited for 195.943653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vwkqb
	I0927 17:44:27.790293   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vwkqb
	I0927 17:44:27.790300   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.790312   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.790320   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.793922   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.989848   33104 request.go:632] Waited for 194.791805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:27.989907   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:27.989914   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.989923   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.989939   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.993415   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.993925   33104 pod_ready.go:93] pod "kube-proxy-vwkqb" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.993944   33104 pod_ready.go:82] duration metric: took 399.743885ms for pod "kube-proxy-vwkqb" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.993955   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.190067   33104 request.go:632] Waited for 196.037102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:44:28.190120   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:44:28.190126   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.190133   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.190138   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.193549   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.389329   33104 request.go:632] Waited for 195.18973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:28.389427   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:28.389436   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.389447   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.389459   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.392869   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.393523   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:28.393543   33104 pod_ready.go:82] duration metric: took 399.580493ms for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.393553   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.589680   33104 request.go:632] Waited for 196.059502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:44:28.589758   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:44:28.589766   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.589798   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.589812   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.593515   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.789392   33104 request.go:632] Waited for 195.298123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:28.789503   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:28.789516   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.789528   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.789539   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.792681   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.793229   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:28.793254   33104 pod_ready.go:82] duration metric: took 399.693783ms for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.793277   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.990199   33104 request.go:632] Waited for 196.858043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m03
	I0927 17:44:28.990266   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m03
	I0927 17:44:28.990272   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.990278   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.990283   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.993839   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.189981   33104 request.go:632] Waited for 195.403888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:29.190077   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:29.190088   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.190096   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.190103   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.193637   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.194214   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:29.194235   33104 pod_ready.go:82] duration metric: took 400.951036ms for pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:29.194250   33104 pod_ready.go:39] duration metric: took 5.200829097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
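	[editor's note] The pod_ready/node_ready lines above follow one pattern: GET the pod, GET the node it runs on, and repeat until the Ready condition is True or the 6m0s budget runs out. The sketch below is an illustration of that polling pattern only, written against client-go; it is not minikube's actual pod_ready.go code, and the kubeconfig path, namespace and pod name are placeholders taken from this run.

	// Illustrative sketch: poll a pod until its Ready condition is True,
	// mirroring the pod_ready.go wait pattern visible in the log above.
	// Assumes a reachable cluster via the default kubeconfig; names are examples.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready, as in the pod_ready.go:93 lines above
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond) // the log shows a similar sub-second polling cadence
		}
	}

	func main() {
		// clientcmd.RecommendedHomeFile (~/.kube/config) stands in for the test run's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-ha-748477-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("ready")
	}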
	I0927 17:44:29.194265   33104 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:44:29.194320   33104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:44:29.209103   33104 api_server.go:72] duration metric: took 22.513227302s to wait for apiserver process to appear ...
	I0927 17:44:29.209147   33104 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:44:29.209171   33104 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0927 17:44:29.213508   33104 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0927 17:44:29.213572   33104 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0927 17:44:29.213579   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.213589   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.213599   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.214754   33104 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 17:44:29.214825   33104 api_server.go:141] control plane version: v1.31.1
	I0927 17:44:29.214842   33104 api_server.go:131] duration metric: took 5.68685ms to wait for apiserver health ...
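	[editor's note] The healthz and /version probes at 17:44:29 are plain HTTPS GETs against the control-plane endpoint from the log (192.168.39.217:8443). On a default kubeadm/minikube cluster the system:public-info-viewer binding typically lets even unauthenticated clients read /healthz and /version, so the check can usually be reproduced with a short snippet like the sketch below; skipping TLS verification is only for this local illustration.

	// Illustrative sketch: reproduce the apiserver /healthz and /version probes
	// seen in the log. TLS verification is skipped purely for this local example;
	// 192.168.39.217:8443 is the control-plane endpoint taken from the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://192.168.39.217:8443" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Expect "ok" with status 200 for /healthz, and a version blob reporting v1.31.1 for /version.
			fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
		}
	}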
	I0927 17:44:29.214854   33104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:44:29.390318   33104 request.go:632] Waited for 175.371088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.390382   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.390388   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.390394   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.390400   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.396973   33104 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0927 17:44:29.403737   33104 system_pods.go:59] 24 kube-system pods found
	I0927 17:44:29.403771   33104 system_pods.go:61] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:44:29.403776   33104 system_pods.go:61] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:44:29.403780   33104 system_pods.go:61] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:44:29.403784   33104 system_pods.go:61] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:44:29.403787   33104 system_pods.go:61] "etcd-ha-748477-m03" [cd23c252-4153-4ed3-900a-ec3ec23a0b8a] Running
	I0927 17:44:29.403790   33104 system_pods.go:61] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:44:29.403794   33104 system_pods.go:61] "kindnet-66lb8" [613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba] Running
	I0927 17:44:29.403796   33104 system_pods.go:61] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:44:29.403800   33104 system_pods.go:61] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:44:29.403806   33104 system_pods.go:61] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:44:29.403810   33104 system_pods.go:61] "kube-apiserver-ha-748477-m03" [1ca56580-06a0-4c17-bfbf-fd04ca381250] Running
	I0927 17:44:29.403814   33104 system_pods.go:61] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:44:29.403818   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:44:29.403823   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m03" [db69354c-c220-4f2a-b350-ce715009dfea] Running
	I0927 17:44:29.403827   33104 system_pods.go:61] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:44:29.403830   33104 system_pods.go:61] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:44:29.403833   33104 system_pods.go:61] "kube-proxy-vwkqb" [cee9a1cd-cce3-4e30-8bbe-1597f7ff4277] Running
	I0927 17:44:29.403836   33104 system_pods.go:61] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:44:29.403839   33104 system_pods.go:61] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:44:29.403841   33104 system_pods.go:61] "kube-scheduler-ha-748477-m03" [e9b04f8f-f820-455c-b70c-103a54bf9944] Running
	I0927 17:44:29.403845   33104 system_pods.go:61] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:44:29.403847   33104 system_pods.go:61] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:44:29.403851   33104 system_pods.go:61] "kube-vip-ha-748477-m03" [5f5c717e-5d86-4b0b-bd34-b4f8eb1f8eca] Running
	I0927 17:44:29.403853   33104 system_pods.go:61] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:44:29.403859   33104 system_pods.go:74] duration metric: took 188.99624ms to wait for pod list to return data ...
	I0927 17:44:29.403865   33104 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:44:29.590098   33104 request.go:632] Waited for 186.16112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:44:29.590155   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:44:29.590162   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.590171   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.590178   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.593809   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.593933   33104 default_sa.go:45] found service account: "default"
	I0927 17:44:29.593953   33104 default_sa.go:55] duration metric: took 190.081669ms for default service account to be created ...
	I0927 17:44:29.593963   33104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:44:29.790359   33104 request.go:632] Waited for 196.323191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.790417   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.790423   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.790430   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.790435   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.798546   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:44:29.805235   33104 system_pods.go:86] 24 kube-system pods found
	I0927 17:44:29.805269   33104 system_pods.go:89] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:44:29.805277   33104 system_pods.go:89] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:44:29.805283   33104 system_pods.go:89] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:44:29.805288   33104 system_pods.go:89] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:44:29.805293   33104 system_pods.go:89] "etcd-ha-748477-m03" [cd23c252-4153-4ed3-900a-ec3ec23a0b8a] Running
	I0927 17:44:29.805299   33104 system_pods.go:89] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:44:29.805304   33104 system_pods.go:89] "kindnet-66lb8" [613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba] Running
	I0927 17:44:29.805309   33104 system_pods.go:89] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:44:29.805315   33104 system_pods.go:89] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:44:29.805321   33104 system_pods.go:89] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:44:29.805328   33104 system_pods.go:89] "kube-apiserver-ha-748477-m03" [1ca56580-06a0-4c17-bfbf-fd04ca381250] Running
	I0927 17:44:29.805337   33104 system_pods.go:89] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:44:29.805352   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:44:29.805358   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m03" [db69354c-c220-4f2a-b350-ce715009dfea] Running
	I0927 17:44:29.805364   33104 system_pods.go:89] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:44:29.805371   33104 system_pods.go:89] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:44:29.805379   33104 system_pods.go:89] "kube-proxy-vwkqb" [cee9a1cd-cce3-4e30-8bbe-1597f7ff4277] Running
	I0927 17:44:29.805386   33104 system_pods.go:89] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:44:29.805394   33104 system_pods.go:89] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:44:29.805400   33104 system_pods.go:89] "kube-scheduler-ha-748477-m03" [e9b04f8f-f820-455c-b70c-103a54bf9944] Running
	I0927 17:44:29.805408   33104 system_pods.go:89] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:44:29.805414   33104 system_pods.go:89] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:44:29.805421   33104 system_pods.go:89] "kube-vip-ha-748477-m03" [5f5c717e-5d86-4b0b-bd34-b4f8eb1f8eca] Running
	I0927 17:44:29.805427   33104 system_pods.go:89] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:44:29.805437   33104 system_pods.go:126] duration metric: took 211.464032ms to wait for k8s-apps to be running ...
	I0927 17:44:29.805449   33104 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:44:29.805501   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:44:29.820712   33104 system_svc.go:56] duration metric: took 15.24207ms WaitForService to wait for kubelet
	I0927 17:44:29.820739   33104 kubeadm.go:582] duration metric: took 23.124868861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:44:29.820756   33104 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:44:29.990257   33104 request.go:632] Waited for 169.421001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0927 17:44:29.990309   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0927 17:44:29.990315   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.990322   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.990328   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.994594   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:29.995485   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995514   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995525   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995529   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995532   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995536   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995540   33104 node_conditions.go:105] duration metric: took 174.779797ms to run NodePressure ...
	I0927 17:44:29.995551   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:44:29.995569   33104 start.go:255] writing updated cluster config ...
	I0927 17:44:29.995843   33104 ssh_runner.go:195] Run: rm -f paused
	I0927 17:44:30.046784   33104 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 17:44:30.049020   33104 out.go:177] * Done! kubectl is now configured to use "ha-748477" cluster and "default" namespace by default
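	[editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's local rate limiter, not from the API server: with an unset rest.Config the client defaults to roughly 5 requests/second with a small burst, so the back-to-back pod/node GETs queue for ~200ms each. Where that local delay matters, the limiter can be relaxed on the client side; the sketch below uses arbitrary example values and is not minikube's configuration.

	// Illustrative sketch: raise client-go's client-side rate limit so bursts of
	// GETs (like the readiness checks above) are not locally throttled.
	// QPS/Burst values here are arbitrary examples, not minikube's settings.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// clientcmd.RecommendedHomeFile (~/.kube/config) stands in for the test run's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // client-go defaults to a low single-digit QPS when this is left at zero
		cfg.Burst = 100 // default burst is also small, which is what triggers the throttling messages
		cs := kubernetes.NewForConfigOrDie(cfg)

		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name) // ha-748477, ha-748477-m02, ha-748477-m03 in the run above
		}
	}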
	
	
	==> CRI-O <==
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.915586004Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-j7gsn,Uid:07233d33-34ed-44e8-a9d5-376e1860ca0c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727459071385161427,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:44:31.057407872Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8b5a708d-128c-492d-bff2-7efbfcc9396f,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1727458932902449667,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T17:42:12.573218348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qvp2z,Uid:61b875d4-dda7-465c-aff9-49e2eb8f5f9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458932879699150,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:12.569958449Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-n99lr,Uid:ec2d5b00-2422-4e07-a352-a47254a81408,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727458932878513965,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:12.563003994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&PodSandboxMetadata{Name:kindnet-5wl4m,Uid:fc7f8df5-02d8-4ad5-a8e8-127335b9d228,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458920706274387,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:00.387399998Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&PodSandboxMetadata{Name:kube-proxy-p76v9,Uid:1ebfb1c9-64bb-47d1-962d-49573740e503,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458920672097797,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:00.357582877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-748477,Uid:b14aea5a97dfd5a2488f6e3ced308879,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1727458909026459903,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: b14aea5a97dfd5a2488f6e3ced308879,kubernetes.io/config.seen: 2024-09-27T17:41:48.537214929Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-748477,Uid:647e1f1a223aa05c0d6b5b0aa1c461da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458909007338821,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 647e1f1a223aa05c0d6b5b0aa1c461da,kubernetes.io/config.seen: 2024-09-27T17:41:48.537216051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-748477,Uid:6ca1e1a0b5ef88fb0f62da990054eb17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458909006052534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{kubernetes.io/config.hash: 6ca1e1a0b5ef88fb0f62da990054eb17,kubernetes.io/config.seen: 2024-09-27T17:41:48.537217513Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f25008a681435c386989bc22da79780f9d2c52dfc
2ee4bd1d34f0366069ed9fe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-748477,Uid:e6983c6d4e8a67eea6f4983292eca43a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458909005424738,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6983c6d4e8a67eea6f4983292eca43a,kubernetes.io/config.seen: 2024-09-27T17:41:48.537216911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&PodSandboxMetadata{Name:etcd-ha-748477,Uid:3ec1f007f86453df35a2f3141bc489b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458908993962377,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-748477,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: 3ec1f007f86453df35a2f3141bc489b3,kubernetes.io/config.seen: 2024-09-27T17:41:48.537210945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b6d3e4bd-c777-4103-90e5-7b6bf625b09d name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.916853322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43b3c5b7-2be2-491b-bbf4-69b12d68b2a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.917364420Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43b3c5b7-2be2-491b-bbf4-69b12d68b2a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.918128727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43b3c5b7-2be2-491b-bbf4-69b12d68b2a2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.943550588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee7d643f-086d-454e-af35-52dbc616a203 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.943644506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee7d643f-086d-454e-af35-52dbc616a203 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.944804831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b521d208-26d7-41b8-9af3-ce6140cd6e36 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.945268006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459290945244958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b521d208-26d7-41b8-9af3-ce6140cd6e36 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.945770821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fe4a7c7-92b3-4e51-b95d-16dfb1827208 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.945848583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fe4a7c7-92b3-4e51-b95d-16dfb1827208 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.946082296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fe4a7c7-92b3-4e51-b95d-16dfb1827208 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.984578353Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ca5f8ae-d7c3-46c1-80ad-938a534eae17 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.984664697Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ca5f8ae-d7c3-46c1-80ad-938a534eae17 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.986538467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6cedf83-942b-4e62-bbce-88cdae3c5572 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.986991502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459290986967654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6cedf83-942b-4e62-bbce-88cdae3c5572 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.987723377Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ffcdbe89-d535-4ed2-aec3-450aae00addd name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.987795993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ffcdbe89-d535-4ed2-aec3-450aae00addd name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:10 ha-748477 crio[659]: time="2024-09-27 17:48:10.988108535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ffcdbe89-d535-4ed2-aec3-450aae00addd name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.025249079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fe59a204-dbaa-483f-962c-42e8d2d5eb81 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.025327280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe59a204-dbaa-483f-962c-42e8d2d5eb81 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.027336091Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9d350e8-8d83-4b0f-8e3d-e987cfdc76f0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.027969615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459291027944040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9d350e8-8d83-4b0f-8e3d-e987cfdc76f0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.029414737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9676365f-d3aa-4a56-b5c7-605113b90958 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.029517830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9676365f-d3aa-4a56-b5c7-605113b90958 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:11 ha-748477 crio[659]: time="2024-09-27 17:48:11.029841003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9676365f-d3aa-4a56-b5c7-605113b90958 name=/runtime.v1.RuntimeService/ListContainers
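
The crio entries above are the server side of CRI gRPC round-trips (Version, ImageFsInfo, ListContainers) issued by the kubelet's status polling; the empty ContainerFilter in each request is what produces the "No filters were applied, returning full container list" debug line. For reference only, here is a minimal Go sketch of issuing the same Version and ListContainers calls against the crio socket. The socket path, the k8s.io/cri-api client, and the grpc setup are assumptions about how one could reproduce these calls by hand (for example from inside the node via minikube ssh); this is not something the test itself runs.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: crio is listening on its default socket inside the node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio: %v", err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// /runtime.v1.RuntimeService/Version, as in the debug lines above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatalf("Version: %v", err)
	}
	fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, which is
	// why crio logs "No filters were applied, returning full container list".
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.GetName(), c.State)
	}
}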
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82d138d00329a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   9af32827ca87e       busybox-7dff88458-j7gsn
	d07f02e11f879       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   ce8d3fbc4ee43       coredns-7c65d6cfc9-qvp2z
	de0f399d2276a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   4c986f9d250c3       coredns-7c65d6cfc9-n99lr
	a7ccc536c4df9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   37067721a3573       storage-provisioner
	cd62df5a50cfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   61f84fe579fbd       kindnet-5wl4m
	42146256b0e01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   dc1e025d5f18b       kube-proxy-p76v9
	4caed5948aafe       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   48cfa3bbc5e9d       kube-vip-ha-748477
	d2acf98043067       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f25008a681435       kube-scheduler-ha-748477
	72fe2a883c95c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   9199f6af07950       etcd-ha-748477
	c7ca45fc1dbb1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9ace3b28f636e       kube-controller-manager-ha-748477
	657c5e75829c7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9ca07019cd0cf       kube-apiserver-ha-748477
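
The CREATED column above is derived from the CreatedAt values in the ListContainers payload, which are Unix timestamps in nanoseconds. A small sketch of that conversion, using the coredns-7c65d6cfc9-n99lr timestamp and the approximate capture time as fixed inputs (both taken from the log itself; a live tool would use time.Now() instead):

package main

import (
	"fmt"
	"time"
)

func main() {
	// CreatedAt of coredns-7c65d6cfc9-n99lr from the ListContainers payload above
	// (Unix nanoseconds).
	const createdAtNanos int64 = 1727458933151942873

	created := time.Unix(0, createdAtNanos)
	// The surrounding crio log was captured around 17:48:10 UTC on 2024-09-27.
	captured := time.Date(2024, time.September, 27, 17, 48, 10, 0, time.UTC)

	age := captured.Sub(created).Truncate(time.Minute)
	fmt.Printf("created %s, about %s before the capture\n",
		created.UTC().Format(time.RFC3339), age) // prints roughly "about 5m0s before the capture"
}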
	
	
	==> coredns [d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777] <==
	[INFO] 10.244.0.4:55585 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166646s
	[INFO] 10.244.0.4:56311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002436177s
	[INFO] 10.244.0.4:45590 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110873s
	[INFO] 10.244.2.2:43192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152715s
	[INFO] 10.244.2.2:44388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177447s
	[INFO] 10.244.2.2:33554 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065853s
	[INFO] 10.244.2.2:58628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162914s
	[INFO] 10.244.1.2:38819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129715s
	[INFO] 10.244.1.2:60816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097737s
	[INFO] 10.244.1.2:36546 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014954s
	[INFO] 10.244.1.2:33829 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081077s
	[INFO] 10.244.1.2:59687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088947s
	[INFO] 10.244.0.4:40268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120362s
	[INFO] 10.244.0.4:38614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077477s
	[INFO] 10.244.0.4:40222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068679s
	[INFO] 10.244.2.2:51489 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133892s
	[INFO] 10.244.1.2:34773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000265454s
	[INFO] 10.244.0.4:56542 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227377s
	[INFO] 10.244.0.4:38585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133165s
	[INFO] 10.244.2.2:32823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133184s
	[INFO] 10.244.2.2:47801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112308s
	[INFO] 10.244.2.2:52586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146231s
	[INFO] 10.244.1.2:50376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194279s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116551s
	[INFO] 10.244.1.2:45074 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069954s
	
	
	==> coredns [de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa] <==
	[INFO] 10.244.2.2:47453 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472755s
	[INFO] 10.244.1.2:51710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208951s
	[INFO] 10.244.1.2:47395 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000128476s
	[INFO] 10.244.1.2:39764 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001916816s
	[INFO] 10.244.0.4:60403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125998s
	[INFO] 10.244.0.4:36329 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177364s
	[INFO] 10.244.0.4:33684 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001089s
	[INFO] 10.244.2.2:47662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002007928s
	[INFO] 10.244.2.2:59058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158193s
	[INFO] 10.244.2.2:40790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001715411s
	[INFO] 10.244.2.2:48349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153048s
	[INFO] 10.244.1.2:55724 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002121618s
	[INFO] 10.244.1.2:41603 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096809s
	[INFO] 10.244.1.2:57083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631103s
	[INFO] 10.244.0.4:48117 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103399s
	[INFO] 10.244.2.2:56316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155752s
	[INFO] 10.244.2.2:36039 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172138s
	[INFO] 10.244.2.2:39197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113674s
	[INFO] 10.244.1.2:59834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130099s
	[INFO] 10.244.1.2:54472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087078s
	[INFO] 10.244.1.2:42463 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079936s
	[INFO] 10.244.0.4:58994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021944s
	[INFO] 10.244.0.4:50757 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135494s
	[INFO] 10.244.2.2:35416 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170114s
	[INFO] 10.244.1.2:50172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011348s
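
Each coredns line above follows the log plugin's common query-log layout: client address and port, query ID, then the quoted request (record type, class, name, transport, request size in bytes, DO bit, advertised UDP buffer size), followed by the response code, header flags, response size in bytes, and query duration. A minimal sketch of splitting one such line into those fields; the regular expression and the field selection are illustrative, not part of coredns:

package main

import (
	"fmt"
	"regexp"
)

// queryLog matches the coredns "log" plugin line format as it appears above.
var queryLog = regexp.MustCompile(
	`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	line := `[INFO] 10.244.0.4:55585 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166646s`

	m := queryLog.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("line did not match the expected format")
		return
	}
	// Groups: 1 client, 2 query id, 3 type, 4 class, 5 name, 6 proto,
	// 7 request size, 8 DO bit, 9 bufsize, 10 rcode, 11 flags, 12 response size, 13 duration.
	fmt.Printf("client=%s type=%s name=%s proto=%s rcode=%s rsize=%sB took=%s\n",
		m[1], m[3], m[5], m[6], m[10], m[12], m[13])
}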
	
	
	==> describe nodes <==
	Name:               ha-748477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:41:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:42:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-748477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492d2104e50247c88ce564105fa6e436
	  System UUID:                492d2104-e502-47c8-8ce5-64105fa6e436
	  Boot ID:                    e44f404a-867d-4f4e-a185-458196aac718
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j7gsn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-n99lr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-qvp2z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-748477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m16s
	  kube-system                 kindnet-5wl4m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-748477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-controller-manager-ha-748477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-proxy-p76v9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-748477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-748477                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m10s  kube-proxy       
	  Normal  Starting                 6m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m16s  kubelet          Node ha-748477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m16s  kubelet          Node ha-748477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m16s  kubelet          Node ha-748477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal  NodeReady                5m59s  kubelet          Node ha-748477 status is now: NodeReady
	  Normal  RegisteredNode           5m14s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	
	
	Name:               ha-748477-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:42:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:45:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    ha-748477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a797c0b98fa454a9290261a4120ee96
	  System UUID:                1a797c0b-98fa-454a-9290-261a4120ee96
	  Boot ID:                    be8b9b76-5b30-449e-8e6a-b392c8bc637d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xmqtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-748477-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-r9smp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-748477-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-748477-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-kxwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-748477-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-748477-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-748477-m02 status is now: NodeNotReady
	
	
	Name:               ha-748477-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:44:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-748477-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f10cf0e49714a128d45f579afd701d8
	  System UUID:                7f10cf0e-4971-4a12-8d45-f579afd701d8
	  Boot ID:                    8028882c-9e9e-4142-9736-fa20678b0690
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8fcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-748477-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-66lb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-748477-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-748477-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-vwkqb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-748477-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-748477-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  RegisteredNode           4m9s                 node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-748477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	
	
	Name:               ha-748477-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_45_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:45:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-748477-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53bc6a6bc9f74a04882f5b53ace38c50
	  System UUID:                53bc6a6b-c9f7-4a04-882f-5b53ace38c50
	  Boot ID:                    797c4344-bca4-4508-93c8-92db2f3a4663
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8kdps       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-t92jl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-748477-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep27 17:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038191] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.766886] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.994968] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.572771] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.496309] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.056667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051200] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.195115] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.125330] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279617] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.856213] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.390156] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.062929] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.000255] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.085204] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 17:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.205900] kauditd_printk_skb: 38 callbacks suppressed
	[ +42.959337] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771] <==
	{"level":"warn","ts":"2024-09-27T17:48:11.308765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.313537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.324911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.333483Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.343337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.347295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.350874Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.356598Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.360197Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.363697Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.369343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.373471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.376788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.384497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.391156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.398215Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.398353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.404858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.408514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.413049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.420823Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.431413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.492960Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.495011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:11.497304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:48:11 up 6 min,  0 users,  load average: 0.19, 0.30, 0.17
	Linux ha-748477 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f] <==
	I0927 17:47:32.267850       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:47:42.265589       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:47:42.265785       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:47:42.266008       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:47:42.266035       1 main.go:299] handling current node
	I0927 17:47:42.266096       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:47:42.266113       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:47:42.266283       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:47:42.266312       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:47:52.271508       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:47:52.271560       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:47:52.271730       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:47:52.271751       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:47:52.271828       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:47:52.271846       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:47:52.271909       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:47:52.271927       1 main.go:299] handling current node
	I0927 17:48:02.265005       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:48:02.265095       1 main.go:299] handling current node
	I0927 17:48:02.265110       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:48:02.265116       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:48:02.265396       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:48:02.265422       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:48:02.265476       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:48:02.265494       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf] <==
	W0927 17:41:54.285503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0927 17:41:54.286484       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 17:41:54.291279       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 17:41:54.388865       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 17:41:55.517839       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 17:41:55.539342       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 17:41:55.549868       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 17:41:59.140843       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 17:42:00.286046       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 17:44:36.903808       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44866: use of closed network connection
	E0927 17:44:37.083629       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44890: use of closed network connection
	E0927 17:44:37.325665       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44898: use of closed network connection
	E0927 17:44:37.513055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44922: use of closed network connection
	E0927 17:44:37.702332       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44948: use of closed network connection
	E0927 17:44:37.883878       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44974: use of closed network connection
	E0927 17:44:38.055802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44990: use of closed network connection
	E0927 17:44:38.236694       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45008: use of closed network connection
	E0927 17:44:38.403967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45026: use of closed network connection
	E0927 17:44:38.704686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45048: use of closed network connection
	E0927 17:44:38.877491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45076: use of closed network connection
	E0927 17:44:39.052837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45094: use of closed network connection
	E0927 17:44:39.232482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45114: use of closed network connection
	E0927 17:44:39.403972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45138: use of closed network connection
	E0927 17:44:39.594519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45158: use of closed network connection
	W0927 17:46:04.298556       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.225]
	
	
	==> kube-controller-manager [c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36] <==
	I0927 17:45:08.716652       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-748477-m04\" does not exist"
	I0927 17:45:08.760763       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-748477-m04" podCIDRs=["10.244.3.0/24"]
	I0927 17:45:08.760823       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:08.760843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.011937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.385318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.574027       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-748477-m04"
	I0927 17:45:09.640869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:11.430286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:11.479780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:12.942848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:12.962049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:18.969210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:29.722225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:45:29.722369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:29.743285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:31.451751       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:39.404025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:46:24.602364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:46:24.602509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:24.628682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:24.710382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.746809ms"
	I0927 17:46:24.710519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.102µs"
	I0927 17:46:26.579533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:29.873026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	
	
	==> kube-proxy [42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 17:42:01.081502       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 17:42:01.110880       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	E0927 17:42:01.111017       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:42:01.147630       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:42:01.147672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:42:01.147695       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:42:01.150196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:42:01.150782       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:42:01.150809       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:42:01.154388       1 config.go:199] "Starting service config controller"
	I0927 17:42:01.154878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:42:01.155097       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:42:01.155116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:42:01.157808       1 config.go:328] "Starting node config controller"
	I0927 17:42:01.157840       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 17:42:01.256235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:42:01.256497       1 shared_informer.go:320] Caches are synced for service config
	I0927 17:42:01.258142       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756] <==
	E0927 17:44:02.933717       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-66lb8\": pod kindnet-66lb8 is already assigned to node \"ha-748477-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-66lb8" node="ha-748477-m03"
	E0927 17:44:02.934559       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba(kube-system/kindnet-66lb8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-66lb8"
	E0927 17:44:02.935616       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-66lb8\": pod kindnet-66lb8 is already assigned to node \"ha-748477-m03\"" pod="kube-system/kindnet-66lb8"
	I0927 17:44:02.935846       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-66lb8" node="ha-748477-m03"
	E0927 17:44:02.934408       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vwkqb\": pod kube-proxy-vwkqb is already assigned to node \"ha-748477-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vwkqb" node="ha-748477-m03"
	E0927 17:44:02.938352       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cee9a1cd-cce3-4e30-8bbe-1597f7ff4277(kube-system/kube-proxy-vwkqb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vwkqb"
	E0927 17:44:02.938437       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vwkqb\": pod kube-proxy-vwkqb is already assigned to node \"ha-748477-m03\"" pod="kube-system/kube-proxy-vwkqb"
	I0927 17:44:02.938478       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vwkqb" node="ha-748477-m03"
	E0927 17:44:31.066581       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j7gsn\": pod busybox-7dff88458-j7gsn is already assigned to node \"ha-748477\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-j7gsn" node="ha-748477"
	E0927 17:44:31.066642       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07233d33-34ed-44e8-a9d5-376e1860ca0c(default/busybox-7dff88458-j7gsn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-j7gsn"
	E0927 17:44:31.066658       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j7gsn\": pod busybox-7dff88458-j7gsn is already assigned to node \"ha-748477\"" pod="default/busybox-7dff88458-j7gsn"
	I0927 17:44:31.066676       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-j7gsn" node="ha-748477"
	E0927 17:44:31.089611       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xmqtg\": pod busybox-7dff88458-xmqtg is already assigned to node \"ha-748477-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xmqtg" node="ha-748477-m02"
	E0927 17:44:31.092159       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bd416f42-71bf-42f9-8e17-921e5b35333b(default/busybox-7dff88458-xmqtg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xmqtg"
	E0927 17:44:31.092486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xmqtg\": pod busybox-7dff88458-xmqtg is already assigned to node \"ha-748477-m02\"" pod="default/busybox-7dff88458-xmqtg"
	I0927 17:44:31.092797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xmqtg" node="ha-748477-m02"
	E0927 17:44:31.312466       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-tpc4p\" not found" pod="default/busybox-7dff88458-tpc4p"
	E0927 17:45:08.782464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.782636       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8041369a-60b6-46ac-ae40-2a232d799caf(kube-system/kindnet-gls7h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gls7h"
	E0927 17:45:08.782676       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" pod="kube-system/kindnet-gls7h"
	I0927 17:45:08.782749       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.783276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:45:08.785675       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fc28a65-d0e3-476e-bc9e-ff4e9f2e85ac(kube-system/kube-proxy-z2tnx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z2tnx"
	E0927 17:45:08.785786       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" pod="kube-system/kube-proxy-z2tnx"
	I0927 17:45:08.785868       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	
	
	==> kubelet <==
	Sep 27 17:46:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:46:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:46:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:46:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:46:55 ha-748477 kubelet[1304]: E0927 17:46:55.552924    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459215552461142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:46:55 ha-748477 kubelet[1304]: E0927 17:46:55.552961    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459215552461142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:05 ha-748477 kubelet[1304]: E0927 17:47:05.554669    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459225554270054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:05 ha-748477 kubelet[1304]: E0927 17:47:05.555306    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459225554270054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:15 ha-748477 kubelet[1304]: E0927 17:47:15.557097    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459235556635818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:15 ha-748477 kubelet[1304]: E0927 17:47:15.557135    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459235556635818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:25 ha-748477 kubelet[1304]: E0927 17:47:25.559322    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459245558659945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:25 ha-748477 kubelet[1304]: E0927 17:47:25.559377    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459245558659945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:35 ha-748477 kubelet[1304]: E0927 17:47:35.561127    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459255560855912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:35 ha-748477 kubelet[1304]: E0927 17:47:35.561197    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459255560855912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:45 ha-748477 kubelet[1304]: E0927 17:47:45.563216    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459265562750178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:45 ha-748477 kubelet[1304]: E0927 17:47:45.567283    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459265562750178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.507545    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:47:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.568682    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459275568451294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.568704    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459275568451294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:05 ha-748477 kubelet[1304]: E0927 17:48:05.570034    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459285569687152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:05 ha-748477 kubelet[1304]: E0927 17:48:05.570079    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459285569687152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-748477 -n ha-748477
helpers_test.go:261: (dbg) Run:  kubectl --context ha-748477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.63s)
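Two conditions repeat through the kubelet journal quoted above: the eviction manager cannot determine HasDedicatedImageFs because the ImageFsInfo response from CRI-O lacks the stats the kubelet expects ("missing image stats"), and the iptables canary cannot create the KUBE-KUBELET-CANARY chain because the ip6tables nat table is unavailable in the guest kernel. A minimal sketch of how one might inspect both by hand, assuming the ha-748477 VM from this run is still up (the canary output shows ip6tables v1.8.9 is present in the guest, and crictl ships in the minikube ISO):

    # assumes the ha-748477 profile still exists; crictl reads the CRI-O socket directly
    out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo crictl imagefsinfo"
    # check whether IPv6 NAT support is loaded in the guest kernel
    out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L -n"

crictl should talk to the same CRI-O endpoint the kubelet uses, so its imagefsinfo output should show exactly which filesystem fields are (and are not) being reported; both messages recur throughout the post-mortem logs collected here.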

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.419747452s)
ha_test.go:413: expected profile "ha-748477" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-748477\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-748477\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-748477\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.217\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.58\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.225\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.37\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
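ha_test.go:413 is checking the Status field of the ha-748477 entry in that profile-list JSON against the string "Degraded". For reference, the same field can be extracted from the quoted payload by hand, e.g. (a sketch only, assuming jq is available on the test host):

    # hypothetical one-liner, not part of the test; mirrors the .valid[].Status shape shown above
    out/minikube-linux-amd64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-748477") | .Status'
    # prints "Unknown" for this run, where the test expected "Degraded"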
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-748477 -n ha-748477
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 logs -n 25: (1.343507506s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m03_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m04 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp testdata/cp-test.txt                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m04_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03:/home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m03 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-748477 node stop m02 -v=7                                                     | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:41:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:41:11.282351   33104 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:41:11.282459   33104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:41:11.282464   33104 out.go:358] Setting ErrFile to fd 2...
	I0927 17:41:11.282469   33104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:41:11.282697   33104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:41:11.283272   33104 out.go:352] Setting JSON to false
	I0927 17:41:11.284134   33104 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5016,"bootTime":1727453855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:41:11.284236   33104 start.go:139] virtualization: kvm guest
	I0927 17:41:11.286413   33104 out.go:177] * [ha-748477] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:41:11.288037   33104 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:41:11.288045   33104 notify.go:220] Checking for updates...
	I0927 17:41:11.289671   33104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:41:11.291343   33104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:11.293056   33104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.294702   33104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:41:11.296107   33104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:41:11.297727   33104 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:41:11.334964   33104 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 17:41:11.336448   33104 start.go:297] selected driver: kvm2
	I0927 17:41:11.336470   33104 start.go:901] validating driver "kvm2" against <nil>
	I0927 17:41:11.336482   33104 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:41:11.337172   33104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:41:11.337254   33104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 17:41:11.353494   33104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 17:41:11.353573   33104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 17:41:11.353841   33104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:41:11.353874   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:11.353916   33104 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 17:41:11.353921   33104 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 17:41:11.353981   33104 start.go:340] cluster config:
	{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:41:11.354070   33104 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:41:11.356133   33104 out.go:177] * Starting "ha-748477" primary control-plane node in "ha-748477" cluster
	I0927 17:41:11.357496   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:41:11.357561   33104 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 17:41:11.357574   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:41:11.357669   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:41:11.357682   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:41:11.358001   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:41:11.358028   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json: {Name:mke89db25d5d216a50900f26b95b8fd2ee54cc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:11.358189   33104 start.go:360] acquireMachinesLock for ha-748477: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:41:11.358227   33104 start.go:364] duration metric: took 22.952µs to acquireMachinesLock for "ha-748477"
	I0927 17:41:11.358249   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:41:11.358314   33104 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 17:41:11.360140   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:41:11.360316   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:11.360378   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:11.375306   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0927 17:41:11.375759   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:11.376301   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:11.376329   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:11.376675   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:11.376850   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:11.377007   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:11.377148   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:41:11.377181   33104 client.go:168] LocalClient.Create starting
	I0927 17:41:11.377218   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:41:11.377295   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:41:11.377314   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:41:11.377384   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:41:11.377413   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:41:11.377441   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:41:11.377466   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:41:11.377486   33104 main.go:141] libmachine: (ha-748477) Calling .PreCreateCheck
	I0927 17:41:11.377873   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:11.378248   33104 main.go:141] libmachine: Creating machine...
	I0927 17:41:11.378289   33104 main.go:141] libmachine: (ha-748477) Calling .Create
	I0927 17:41:11.378436   33104 main.go:141] libmachine: (ha-748477) Creating KVM machine...
	I0927 17:41:11.379983   33104 main.go:141] libmachine: (ha-748477) DBG | found existing default KVM network
	I0927 17:41:11.380694   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.380548   33127 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b50}
	I0927 17:41:11.380717   33104 main.go:141] libmachine: (ha-748477) DBG | created network xml: 
	I0927 17:41:11.380729   33104 main.go:141] libmachine: (ha-748477) DBG | <network>
	I0927 17:41:11.380736   33104 main.go:141] libmachine: (ha-748477) DBG |   <name>mk-ha-748477</name>
	I0927 17:41:11.380744   33104 main.go:141] libmachine: (ha-748477) DBG |   <dns enable='no'/>
	I0927 17:41:11.380751   33104 main.go:141] libmachine: (ha-748477) DBG |   
	I0927 17:41:11.380761   33104 main.go:141] libmachine: (ha-748477) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 17:41:11.380765   33104 main.go:141] libmachine: (ha-748477) DBG |     <dhcp>
	I0927 17:41:11.380773   33104 main.go:141] libmachine: (ha-748477) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 17:41:11.380778   33104 main.go:141] libmachine: (ha-748477) DBG |     </dhcp>
	I0927 17:41:11.380786   33104 main.go:141] libmachine: (ha-748477) DBG |   </ip>
	I0927 17:41:11.380790   33104 main.go:141] libmachine: (ha-748477) DBG |   
	I0927 17:41:11.380886   33104 main.go:141] libmachine: (ha-748477) DBG | </network>
	I0927 17:41:11.380936   33104 main.go:141] libmachine: (ha-748477) DBG | 
	I0927 17:41:11.386015   33104 main.go:141] libmachine: (ha-748477) DBG | trying to create private KVM network mk-ha-748477 192.168.39.0/24...
	I0927 17:41:11.458118   33104 main.go:141] libmachine: (ha-748477) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 ...
	I0927 17:41:11.458145   33104 main.go:141] libmachine: (ha-748477) DBG | private KVM network mk-ha-748477 192.168.39.0/24 created
	I0927 17:41:11.458158   33104 main.go:141] libmachine: (ha-748477) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:41:11.458170   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.458056   33127 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.458262   33104 main.go:141] libmachine: (ha-748477) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:41:11.695851   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.695688   33127 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa...
	I0927 17:41:11.894120   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.893958   33127 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/ha-748477.rawdisk...
	I0927 17:41:11.894152   33104 main.go:141] libmachine: (ha-748477) DBG | Writing magic tar header
	I0927 17:41:11.894162   33104 main.go:141] libmachine: (ha-748477) DBG | Writing SSH key tar header
	I0927 17:41:11.894171   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.894079   33127 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 ...
	I0927 17:41:11.894191   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477
	I0927 17:41:11.894234   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 (perms=drwx------)
	I0927 17:41:11.894262   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:41:11.894278   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:41:11.894286   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:41:11.894294   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.894300   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:41:11.894308   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:41:11.894314   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:41:11.894322   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home
	I0927 17:41:11.894332   33104 main.go:141] libmachine: (ha-748477) DBG | Skipping /home - not owner
	I0927 17:41:11.894350   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:41:11.894382   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:41:11.894396   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:41:11.894409   33104 main.go:141] libmachine: (ha-748477) Creating domain...
	I0927 17:41:11.895515   33104 main.go:141] libmachine: (ha-748477) define libvirt domain using xml: 
	I0927 17:41:11.895554   33104 main.go:141] libmachine: (ha-748477) <domain type='kvm'>
	I0927 17:41:11.895564   33104 main.go:141] libmachine: (ha-748477)   <name>ha-748477</name>
	I0927 17:41:11.895570   33104 main.go:141] libmachine: (ha-748477)   <memory unit='MiB'>2200</memory>
	I0927 17:41:11.895577   33104 main.go:141] libmachine: (ha-748477)   <vcpu>2</vcpu>
	I0927 17:41:11.895582   33104 main.go:141] libmachine: (ha-748477)   <features>
	I0927 17:41:11.895589   33104 main.go:141] libmachine: (ha-748477)     <acpi/>
	I0927 17:41:11.895594   33104 main.go:141] libmachine: (ha-748477)     <apic/>
	I0927 17:41:11.895600   33104 main.go:141] libmachine: (ha-748477)     <pae/>
	I0927 17:41:11.895611   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.895618   33104 main.go:141] libmachine: (ha-748477)   </features>
	I0927 17:41:11.895625   33104 main.go:141] libmachine: (ha-748477)   <cpu mode='host-passthrough'>
	I0927 17:41:11.895636   33104 main.go:141] libmachine: (ha-748477)   
	I0927 17:41:11.895642   33104 main.go:141] libmachine: (ha-748477)   </cpu>
	I0927 17:41:11.895652   33104 main.go:141] libmachine: (ha-748477)   <os>
	I0927 17:41:11.895658   33104 main.go:141] libmachine: (ha-748477)     <type>hvm</type>
	I0927 17:41:11.895667   33104 main.go:141] libmachine: (ha-748477)     <boot dev='cdrom'/>
	I0927 17:41:11.895677   33104 main.go:141] libmachine: (ha-748477)     <boot dev='hd'/>
	I0927 17:41:11.895684   33104 main.go:141] libmachine: (ha-748477)     <bootmenu enable='no'/>
	I0927 17:41:11.895695   33104 main.go:141] libmachine: (ha-748477)   </os>
	I0927 17:41:11.895726   33104 main.go:141] libmachine: (ha-748477)   <devices>
	I0927 17:41:11.895746   33104 main.go:141] libmachine: (ha-748477)     <disk type='file' device='cdrom'>
	I0927 17:41:11.895755   33104 main.go:141] libmachine: (ha-748477)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/boot2docker.iso'/>
	I0927 17:41:11.895767   33104 main.go:141] libmachine: (ha-748477)       <target dev='hdc' bus='scsi'/>
	I0927 17:41:11.895779   33104 main.go:141] libmachine: (ha-748477)       <readonly/>
	I0927 17:41:11.895787   33104 main.go:141] libmachine: (ha-748477)     </disk>
	I0927 17:41:11.895799   33104 main.go:141] libmachine: (ha-748477)     <disk type='file' device='disk'>
	I0927 17:41:11.895810   33104 main.go:141] libmachine: (ha-748477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:41:11.895825   33104 main.go:141] libmachine: (ha-748477)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/ha-748477.rawdisk'/>
	I0927 17:41:11.895835   33104 main.go:141] libmachine: (ha-748477)       <target dev='hda' bus='virtio'/>
	I0927 17:41:11.895843   33104 main.go:141] libmachine: (ha-748477)     </disk>
	I0927 17:41:11.895850   33104 main.go:141] libmachine: (ha-748477)     <interface type='network'>
	I0927 17:41:11.895865   33104 main.go:141] libmachine: (ha-748477)       <source network='mk-ha-748477'/>
	I0927 17:41:11.895880   33104 main.go:141] libmachine: (ha-748477)       <model type='virtio'/>
	I0927 17:41:11.895892   33104 main.go:141] libmachine: (ha-748477)     </interface>
	I0927 17:41:11.895902   33104 main.go:141] libmachine: (ha-748477)     <interface type='network'>
	I0927 17:41:11.895912   33104 main.go:141] libmachine: (ha-748477)       <source network='default'/>
	I0927 17:41:11.895923   33104 main.go:141] libmachine: (ha-748477)       <model type='virtio'/>
	I0927 17:41:11.895932   33104 main.go:141] libmachine: (ha-748477)     </interface>
	I0927 17:41:11.895944   33104 main.go:141] libmachine: (ha-748477)     <serial type='pty'>
	I0927 17:41:11.895957   33104 main.go:141] libmachine: (ha-748477)       <target port='0'/>
	I0927 17:41:11.895968   33104 main.go:141] libmachine: (ha-748477)     </serial>
	I0927 17:41:11.895990   33104 main.go:141] libmachine: (ha-748477)     <console type='pty'>
	I0927 17:41:11.896002   33104 main.go:141] libmachine: (ha-748477)       <target type='serial' port='0'/>
	I0927 17:41:11.896015   33104 main.go:141] libmachine: (ha-748477)     </console>
	I0927 17:41:11.896031   33104 main.go:141] libmachine: (ha-748477)     <rng model='virtio'>
	I0927 17:41:11.896046   33104 main.go:141] libmachine: (ha-748477)       <backend model='random'>/dev/random</backend>
	I0927 17:41:11.896060   33104 main.go:141] libmachine: (ha-748477)     </rng>
	I0927 17:41:11.896070   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.896076   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.896083   33104 main.go:141] libmachine: (ha-748477)   </devices>
	I0927 17:41:11.896087   33104 main.go:141] libmachine: (ha-748477) </domain>
	I0927 17:41:11.896095   33104 main.go:141] libmachine: (ha-748477) 
	I0927 17:41:11.900567   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:73:40:b9 in network default
	I0927 17:41:11.901061   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:11.901075   33104 main.go:141] libmachine: (ha-748477) Ensuring networks are active...
	I0927 17:41:11.901826   33104 main.go:141] libmachine: (ha-748477) Ensuring network default is active
	I0927 17:41:11.902116   33104 main.go:141] libmachine: (ha-748477) Ensuring network mk-ha-748477 is active
	I0927 17:41:11.902614   33104 main.go:141] libmachine: (ha-748477) Getting domain xml...
	I0927 17:41:11.903566   33104 main.go:141] libmachine: (ha-748477) Creating domain...
	I0927 17:41:13.125948   33104 main.go:141] libmachine: (ha-748477) Waiting to get IP...
	I0927 17:41:13.126613   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.126980   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.127001   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.126925   33127 retry.go:31] will retry after 221.741675ms: waiting for machine to come up
	I0927 17:41:13.350389   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.350866   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.350891   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.350820   33127 retry.go:31] will retry after 384.917671ms: waiting for machine to come up
	I0927 17:41:13.737469   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.737940   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.737963   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.737901   33127 retry.go:31] will retry after 357.409754ms: waiting for machine to come up
	I0927 17:41:14.096593   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:14.097137   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:14.097157   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:14.097100   33127 retry.go:31] will retry after 455.369509ms: waiting for machine to come up
	I0927 17:41:14.553700   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:14.554092   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:14.554138   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:14.554063   33127 retry.go:31] will retry after 555.024151ms: waiting for machine to come up
	I0927 17:41:15.111039   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:15.111576   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:15.111596   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:15.111511   33127 retry.go:31] will retry after 767.019564ms: waiting for machine to come up
	I0927 17:41:15.880561   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:15.880971   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:15.881009   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:15.880933   33127 retry.go:31] will retry after 930.894786ms: waiting for machine to come up
	I0927 17:41:16.814028   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:16.814547   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:16.814568   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:16.814503   33127 retry.go:31] will retry after 1.391282407s: waiting for machine to come up
	I0927 17:41:18.208116   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:18.208453   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:18.208476   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:18.208423   33127 retry.go:31] will retry after 1.406630844s: waiting for machine to come up
	I0927 17:41:19.617054   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:19.617491   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:19.617513   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:19.617444   33127 retry.go:31] will retry after 1.955568674s: waiting for machine to come up
	I0927 17:41:21.574672   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:21.575031   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:21.575056   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:21.574984   33127 retry.go:31] will retry after 2.462121776s: waiting for machine to come up
	I0927 17:41:24.039742   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:24.040176   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:24.040197   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:24.040139   33127 retry.go:31] will retry after 3.071571928s: waiting for machine to come up
	I0927 17:41:27.113044   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:27.113494   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:27.113522   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:27.113444   33127 retry.go:31] will retry after 3.158643907s: waiting for machine to come up
	I0927 17:41:30.273431   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:30.273901   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:30.273928   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:30.273851   33127 retry.go:31] will retry after 4.144134204s: waiting for machine to come up
	I0927 17:41:34.421621   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.421958   33104 main.go:141] libmachine: (ha-748477) Found IP for machine: 192.168.39.217
	I0927 17:41:34.421985   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has current primary IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.421995   33104 main.go:141] libmachine: (ha-748477) Reserving static IP address...
	I0927 17:41:34.422371   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find host DHCP lease matching {name: "ha-748477", mac: "52:54:00:cf:7b:81", ip: "192.168.39.217"} in network mk-ha-748477
	I0927 17:41:34.496658   33104 main.go:141] libmachine: (ha-748477) Reserved static IP address: 192.168.39.217
	I0927 17:41:34.496683   33104 main.go:141] libmachine: (ha-748477) Waiting for SSH to be available...
	I0927 17:41:34.496692   33104 main.go:141] libmachine: (ha-748477) DBG | Getting to WaitForSSH function...
	I0927 17:41:34.499481   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.499883   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477
	I0927 17:41:34.499908   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find defined IP address of network mk-ha-748477 interface with MAC address 52:54:00:cf:7b:81
	I0927 17:41:34.500086   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH client type: external
	I0927 17:41:34.500117   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa (-rw-------)
	I0927 17:41:34.500142   33104 main.go:141] libmachine: (ha-748477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:41:34.500152   33104 main.go:141] libmachine: (ha-748477) DBG | About to run SSH command:
	I0927 17:41:34.500164   33104 main.go:141] libmachine: (ha-748477) DBG | exit 0
	I0927 17:41:34.503851   33104 main.go:141] libmachine: (ha-748477) DBG | SSH cmd err, output: exit status 255: 
	I0927 17:41:34.503922   33104 main.go:141] libmachine: (ha-748477) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 17:41:34.503936   33104 main.go:141] libmachine: (ha-748477) DBG | command : exit 0
	I0927 17:41:34.503943   33104 main.go:141] libmachine: (ha-748477) DBG | err     : exit status 255
	I0927 17:41:34.503959   33104 main.go:141] libmachine: (ha-748477) DBG | output  : 
	I0927 17:41:37.504545   33104 main.go:141] libmachine: (ha-748477) DBG | Getting to WaitForSSH function...
	I0927 17:41:37.507144   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.507648   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.507672   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.507819   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH client type: external
	I0927 17:41:37.507868   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa (-rw-------)
	I0927 17:41:37.507900   33104 main.go:141] libmachine: (ha-748477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:41:37.507920   33104 main.go:141] libmachine: (ha-748477) DBG | About to run SSH command:
	I0927 17:41:37.507941   33104 main.go:141] libmachine: (ha-748477) DBG | exit 0
	I0927 17:41:37.630810   33104 main.go:141] libmachine: (ha-748477) DBG | SSH cmd err, output: <nil>: 
	I0927 17:41:37.631066   33104 main.go:141] libmachine: (ha-748477) KVM machine creation complete!
	I0927 17:41:37.631372   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:37.631910   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:37.632095   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:37.632272   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:41:37.632285   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:37.633516   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:41:37.633528   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:41:37.633533   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:41:37.633550   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.635751   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.636081   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.636099   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.636220   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.636388   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.636532   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.636625   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.636778   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.636951   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.636961   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:41:37.734259   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
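	[annotation] The probes above are libmachine waiting for the guest's sshd: it keeps running "exit 0" over SSH (first with the external ssh client, then the native one) until the command succeeds. Below is a minimal Go sketch of such a wait loop using the external ssh binary; the host, key path, retry interval and attempt count are placeholders for illustration, not minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH repeatedly runs "exit 0" on the guest until sshd answers,
	// mirroring the WaitForSSH retries visible in the log above.
	func waitForSSH(host, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+host, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil // sshd is up and accepts the key
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
		}
		return fmt.Errorf("ssh to %s not ready after %d attempts", host, attempts)
	}

	func main() {
		// Placeholder values; the real key lives under .minikube/machines/<name>/id_rsa.
		if err := waitForSSH("192.168.39.217", "/path/to/id_rsa", 10); err != nil {
			fmt.Println(err)
		}
	}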
	I0927 17:41:37.734293   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:41:37.734303   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.737128   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.737466   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.737495   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.737627   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.737846   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.737998   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.738153   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.738274   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.738468   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.738480   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:41:37.835159   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:41:37.835214   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:41:37.835220   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:41:37.835227   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:37.835463   33104 buildroot.go:166] provisioning hostname "ha-748477"
	I0927 17:41:37.835485   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:37.835646   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.838659   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.838974   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.838995   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.839272   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.839470   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.839648   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.839769   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.839931   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.840140   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.840159   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477 && echo "ha-748477" | sudo tee /etc/hostname
	I0927 17:41:37.952689   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:41:37.952711   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.955478   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.955872   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.955904   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.956089   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.956272   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.956442   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.956569   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.956706   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.956867   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.956881   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:41:38.063375   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:41:38.063408   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:41:38.063477   33104 buildroot.go:174] setting up certificates
	I0927 17:41:38.063491   33104 provision.go:84] configureAuth start
	I0927 17:41:38.063509   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:38.063799   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.066439   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.066780   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.066808   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.066982   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.069059   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.069387   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.069405   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.069581   33104 provision.go:143] copyHostCerts
	I0927 17:41:38.069625   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:41:38.069666   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:41:38.069678   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:41:38.069763   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:41:38.069850   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:41:38.069876   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:41:38.069882   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:41:38.069916   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:41:38.069980   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:41:38.070006   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:41:38.070015   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:41:38.070049   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:41:38.070101   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477 san=[127.0.0.1 192.168.39.217 ha-748477 localhost minikube]
	I0927 17:41:38.147021   33104 provision.go:177] copyRemoteCerts
	I0927 17:41:38.147089   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:41:38.147110   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.149977   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.150246   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.150274   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.150432   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.150602   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.150754   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.150921   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.228142   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:41:38.228227   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:41:38.251467   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:41:38.251538   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 17:41:38.274370   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:41:38.274489   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:41:38.296698   33104 provision.go:87] duration metric: took 233.191722ms to configureAuth
	I0927 17:41:38.296732   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:41:38.296932   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:41:38.297016   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.299619   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.299927   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.299966   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.300128   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.300322   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.300479   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.300682   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.300851   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:38.301048   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:38.301067   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:41:38.523444   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:41:38.523472   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:41:38.523483   33104 main.go:141] libmachine: (ha-748477) Calling .GetURL
	I0927 17:41:38.524760   33104 main.go:141] libmachine: (ha-748477) DBG | Using libvirt version 6000000
	I0927 17:41:38.527048   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.527364   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.527391   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.527606   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:41:38.527637   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:41:38.527650   33104 client.go:171] duration metric: took 27.150459274s to LocalClient.Create
	I0927 17:41:38.527678   33104 start.go:167] duration metric: took 27.150528415s to libmachine.API.Create "ha-748477"
	I0927 17:41:38.527690   33104 start.go:293] postStartSetup for "ha-748477" (driver="kvm2")
	I0927 17:41:38.527705   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:41:38.527728   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.527972   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:41:38.528001   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.530216   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.530626   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.530665   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.530772   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.530924   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.531065   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.531219   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.609034   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:41:38.613222   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:41:38.613247   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:41:38.613317   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:41:38.613401   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:41:38.613411   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:41:38.613506   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:41:38.622717   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:41:38.645459   33104 start.go:296] duration metric: took 117.75234ms for postStartSetup
	I0927 17:41:38.645507   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:38.646122   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.648685   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.648941   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.648975   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.649188   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:41:38.649458   33104 start.go:128] duration metric: took 27.291131215s to createHost
	I0927 17:41:38.649491   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.651737   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.652093   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.652119   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.652302   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.652471   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.652616   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.652728   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.652843   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:38.653010   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:38.653020   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:41:38.751064   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458898.732716995
	
	I0927 17:41:38.751086   33104 fix.go:216] guest clock: 1727458898.732716995
	I0927 17:41:38.751094   33104 fix.go:229] Guest: 2024-09-27 17:41:38.732716995 +0000 UTC Remote: 2024-09-27 17:41:38.649473144 +0000 UTC m=+27.402870254 (delta=83.243851ms)
	I0927 17:41:38.751135   33104 fix.go:200] guest clock delta is within tolerance: 83.243851ms
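	[annotation] The guest-clock check above compares the VM's "date +%s.%N" output against the host clock and accepts a small skew. A tiny Go sketch of that comparison, using the two timestamps taken from the log; the 2-second tolerance is an assumed placeholder, not the value minikube actually configures.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values copied from the log lines above.
		guest := time.Unix(1727458898, 732716995)                       // guest: date +%s.%N
		host := time.Date(2024, 9, 27, 17, 41, 38, 649473144, time.UTC) // host wall clock at the probe

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for this sketch
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
		// Prints: guest clock delta 83.243851ms, within tolerance: true
	}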
	I0927 17:41:38.751145   33104 start.go:83] releasing machines lock for "ha-748477", held for 27.392909773s
	I0927 17:41:38.751166   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.751423   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.754190   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.754506   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.754527   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.754757   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755262   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755415   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755525   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:41:38.755565   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.755625   33104 ssh_runner.go:195] Run: cat /version.json
	I0927 17:41:38.755649   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.758113   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758305   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758445   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.758479   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758603   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.758725   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.758751   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758761   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.758893   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.758901   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.759041   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.759038   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.759157   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.759261   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.831198   33104 ssh_runner.go:195] Run: systemctl --version
	I0927 17:41:38.870670   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:41:39.025889   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:41:39.031712   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:41:39.031797   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:41:39.047705   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:41:39.047735   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:41:39.047802   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:41:39.063366   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:41:39.077273   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:41:39.077334   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:41:39.090744   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:41:39.103931   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:41:39.214425   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:41:39.364442   33104 docker.go:233] disabling docker service ...
	I0927 17:41:39.364513   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:41:39.380260   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:41:39.394355   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:41:39.522355   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:41:39.649820   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:41:39.663016   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:41:39.680505   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:41:39.680564   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.690319   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:41:39.690383   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.699872   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.709466   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.719082   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:41:39.729267   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.739369   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.757384   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
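	[annotation] The commands above patch /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup driver, conmon cgroup, default sysctls). A rough Go equivalent of that kind of line rewrite is sketched below; the regular expressions mirror the sed expressions for pause_image and cgroup_manager, while the file fragment itself is illustrative, not the actual contents of the config on the node.

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// A stand-in fragment of 02-crio.conf for the sketch.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"

		// Force the pause image and the cgroupfs driver, as the sed commands in the log do.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Print(conf)
	}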
	I0927 17:41:39.767495   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:41:39.776770   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:41:39.776822   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:41:39.789488   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:41:39.798777   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:41:39.926081   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:41:40.015516   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:41:40.015581   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:41:40.020128   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:41:40.020188   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:41:40.023698   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:41:40.059901   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:41:40.059966   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:41:40.086976   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:41:40.115858   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:41:40.117036   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:40.119598   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:40.119937   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:40.119968   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:40.120181   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:41:40.124032   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:41:40.135947   33104 kubeadm.go:883] updating cluster {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 17:41:40.136051   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:41:40.136092   33104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:41:40.165756   33104 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 17:41:40.165826   33104 ssh_runner.go:195] Run: which lz4
	I0927 17:41:40.169366   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0927 17:41:40.169454   33104 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 17:41:40.173416   33104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 17:41:40.173444   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 17:41:41.416629   33104 crio.go:462] duration metric: took 1.247195052s to copy over tarball
	I0927 17:41:41.416710   33104 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 17:41:43.420793   33104 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.004054416s)
	I0927 17:41:43.420819   33104 crio.go:469] duration metric: took 2.004155312s to extract the tarball
	I0927 17:41:43.420825   33104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 17:41:43.457422   33104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:41:43.499761   33104 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:41:43.499782   33104 cache_images.go:84] Images are preloaded, skipping loading
	I0927 17:41:43.499792   33104 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.1 crio true true} ...
	I0927 17:41:43.499910   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:41:43.499992   33104 ssh_runner.go:195] Run: crio config
	I0927 17:41:43.543198   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:43.543224   33104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 17:41:43.543236   33104 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 17:41:43.543262   33104 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-748477 NodeName:ha-748477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 17:41:43.543436   33104 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-748477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
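	[annotation] The kubeadm options dumped at kubeadm.go:181 are rendered into the YAML shown above and later copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of that templating step, here is a Go sketch; the template is trimmed to a few fields and is hypothetical, not minikube's real template.

	package main

	import (
		"os"
		"text/template"
	)

	// A cut-down InitConfiguration template for the sketch.
	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	`

	func main() {
		// Values taken from the options logged above.
		opts := struct {
			AdvertiseAddress, CRISocket, NodeName, NodeIP string
			APIServerPort                                 int
		}{"192.168.39.217", "/var/run/crio/crio.sock", "ha-748477", "192.168.39.217", 8443}

		template.Must(template.New("kubeadm").Parse(initCfg)).Execute(os.Stdout, opts)
	}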
	
	I0927 17:41:43.543460   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:41:43.543509   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:41:43.558812   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:41:43.558948   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0927 17:41:43.559015   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:41:43.568537   33104 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 17:41:43.568607   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 17:41:43.577953   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0927 17:41:43.593972   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:41:43.611240   33104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 17:41:43.627698   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0927 17:41:43.643839   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:41:43.647475   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:41:43.658814   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:41:43.786484   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:41:43.804054   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.217
	I0927 17:41:43.804083   33104 certs.go:194] generating shared ca certs ...
	I0927 17:41:43.804104   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:43.804286   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:41:43.804341   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:41:43.804355   33104 certs.go:256] generating profile certs ...
	I0927 17:41:43.804425   33104 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:41:43.804453   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt with IP's: []
	I0927 17:41:44.048105   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt ...
	I0927 17:41:44.048135   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt: {Name:mkd7683af781c2e3035297a91fe64cae3ec441ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.048290   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key ...
	I0927 17:41:44.048301   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key: {Name:mk936ca4ca8308f6e8f8130ae52fa2d91744c76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.048375   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce
	I0927 17:41:44.048390   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.254]
	I0927 17:41:44.272337   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce ...
	I0927 17:41:44.272368   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce: {Name:mkf1d6d3812ecb98203f4090aef1221789d1a599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.272516   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce ...
	I0927 17:41:44.272528   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce: {Name:mkb32ad35d33db5f9c4a13f60989170569fbf531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.272591   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:41:44.272698   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:41:44.272754   33104 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:41:44.272768   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt with IP's: []
	I0927 17:41:44.519852   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt ...
	I0927 17:41:44.519879   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt: {Name:mk1051474491995de79f8f5636180a2c0021f95c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.520021   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key ...
	I0927 17:41:44.520031   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key: {Name:mkad9e4d33b049f5b649702366bd9b4b30c4cec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
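	[annotation] The certs.go/crypto.go lines above generate the profile certificates, including an apiserver serving cert whose SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.217 and 192.168.39.254. A minimal Go sketch of minting such a cert against a CA is shown below; the throwaway CA, subject names, and validity period are assumptions for the sketch, not minikube's exact settings.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for .minikube/ca.{crt,key}.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving cert with the IP SANs seen in the log above.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.254"),
			},
			DNSNames: []string{"ha-748477", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		fmt.Println(len(der), err)
	}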
	I0927 17:41:44.520090   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:41:44.520107   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:41:44.520117   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:41:44.520140   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:41:44.520152   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:41:44.520167   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:41:44.520179   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:41:44.520191   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:41:44.520236   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:41:44.520268   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:41:44.520279   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:41:44.520308   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:41:44.520329   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:41:44.520350   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:41:44.520386   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:41:44.520410   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.520426   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.520438   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.521064   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:41:44.546442   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:41:44.578778   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:41:44.609231   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:41:44.633930   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 17:41:44.658617   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 17:41:44.684890   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:41:44.709741   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:41:44.734927   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:41:44.758813   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:41:44.782007   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:41:44.806214   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
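
The scp lines above push the freshly generated CA, apiserver, and proxy-client certificates into /var/lib/minikube/certs on the VM, and write an in-memory kubeconfig to /var/lib/minikube/kubeconfig. The following is a minimal Go sketch of copying one such file over SSH; it uses golang.org/x/crypto/ssh plus github.com/pkg/sftp rather than minikube's own ssh_runner, and the target path and permission handling are simplified assumptions.

	// copycert.go: push one certificate to a remote host over SSH/SFTP.
	// This is a sketch, not minikube's ssh_runner; writing under
	// /var/lib/minikube/certs would normally require root on the VM.
	package main

	import (
		"io"
		"log"
		"os"

		"github.com/pkg/sftp"
		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, and address are taken from the sshutil line later in this log.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
		}
		conn, err := ssh.Dial("tcp", "192.168.39.217:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client, err := sftp.NewClient(conn)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		src, err := os.Open("ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		defer src.Close()
		dst, err := client.Create("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		defer dst.Close()
		if _, err := io.Copy(dst, src); err != nil {
			log.Fatal(err)
		}
	}
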
	I0927 17:41:44.823670   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:41:44.829647   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:41:44.840856   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.846133   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.846189   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.852561   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:41:44.864442   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:41:44.875936   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.880730   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.880801   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.886623   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:41:44.897721   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:41:44.909287   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.914201   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.914262   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.920052   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
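
The lines above make each CA bundle trusted on the node: the PEM is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a <hash>.0 symlink is created in /etc/ssl/certs. Below is a minimal sketch of that step, shelling out to the same openssl invocation; it links the PEM directly to the hash name, skipping the intermediate /etc/ssl/certs/<name>.pem link shown in the log.

	// hashlink.go: link a PEM certificate into /etc/ssl/certs/<subject-hash>.0,
	// mirroring the "openssl x509 -hash -noout" + "ln -fs" pair above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // "ln -fs" semantics: replace an existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err) // typically needs root to write under /etc/ssl/certs
		}
	}
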
	I0927 17:41:44.931726   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:41:44.936188   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:41:44.936247   33104 kubeadm.go:392] StartCluster: {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:41:44.936344   33104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 17:41:44.936410   33104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 17:41:44.979358   33104 cri.go:89] found id: ""
	I0927 17:41:44.979433   33104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 17:41:44.989817   33104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 17:41:45.002904   33104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 17:41:45.014738   33104 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 17:41:45.014760   33104 kubeadm.go:157] found existing configuration files:
	
	I0927 17:41:45.014817   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 17:41:45.024092   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 17:41:45.024152   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 17:41:45.033904   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 17:41:45.043382   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 17:41:45.043439   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 17:41:45.052729   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 17:41:45.062303   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 17:41:45.062382   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 17:41:45.073359   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 17:41:45.082763   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 17:41:45.082834   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
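
The grep/rm pairs above are minikube's stale-config check: any existing /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs (here they are all simply missing, as expected on a first start). A minimal local sketch of the same check, run directly on the node instead of through ssh_runner:

	// cleanconf.go: remove kubeconfig files that do not point at the expected
	// control-plane endpoint, mirroring the grep/rm sequence above.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean up (the common first-start case)
			}
			if !strings.Contains(string(data), endpoint) {
				if err := os.Remove(f); err != nil {
					log.Printf("could not remove %s: %v", f, err)
				}
			}
		}
	}
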
	I0927 17:41:45.093349   33104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 17:41:45.190478   33104 kubeadm.go:310] W0927 17:41:45.177079     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:41:45.191151   33104 kubeadm.go:310] W0927 17:41:45.178026     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:41:45.332459   33104 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 17:41:56.118950   33104 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 17:41:56.119025   33104 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 17:41:56.119141   33104 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 17:41:56.119282   33104 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 17:41:56.119422   33104 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 17:41:56.119505   33104 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 17:41:56.121450   33104 out.go:235]   - Generating certificates and keys ...
	I0927 17:41:56.121521   33104 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 17:41:56.121578   33104 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 17:41:56.121641   33104 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 17:41:56.121689   33104 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 17:41:56.121748   33104 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 17:41:56.121792   33104 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 17:41:56.121837   33104 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 17:41:56.121974   33104 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-748477 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0927 17:41:56.122044   33104 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 17:41:56.122168   33104 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-748477 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0927 17:41:56.122242   33104 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 17:41:56.122342   33104 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 17:41:56.122390   33104 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 17:41:56.122467   33104 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 17:41:56.122542   33104 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 17:41:56.122616   33104 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 17:41:56.122697   33104 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 17:41:56.122753   33104 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 17:41:56.122800   33104 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 17:41:56.122872   33104 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 17:41:56.122939   33104 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 17:41:56.124312   33104 out.go:235]   - Booting up control plane ...
	I0927 17:41:56.124416   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 17:41:56.124486   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 17:41:56.124538   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 17:41:56.124665   33104 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 17:41:56.124745   33104 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 17:41:56.124780   33104 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 17:41:56.124883   33104 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 17:41:56.124963   33104 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 17:41:56.125009   33104 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.127696ms
	I0927 17:41:56.125069   33104 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 17:41:56.125115   33104 kubeadm.go:310] [api-check] The API server is healthy after 6.021061385s
	I0927 17:41:56.125196   33104 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 17:41:56.125298   33104 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 17:41:56.125379   33104 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 17:41:56.125578   33104 kubeadm.go:310] [mark-control-plane] Marking the node ha-748477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 17:41:56.125630   33104 kubeadm.go:310] [bootstrap-token] Using token: hgqoqf.s456496vm8m19s9c
	I0927 17:41:56.127181   33104 out.go:235]   - Configuring RBAC rules ...
	I0927 17:41:56.127280   33104 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 17:41:56.127363   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 17:41:56.127490   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 17:41:56.127609   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 17:41:56.127704   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 17:41:56.127779   33104 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 17:41:56.127880   33104 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 17:41:56.127917   33104 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 17:41:56.127954   33104 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 17:41:56.127960   33104 kubeadm.go:310] 
	I0927 17:41:56.128007   33104 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 17:41:56.128013   33104 kubeadm.go:310] 
	I0927 17:41:56.128079   33104 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 17:41:56.128085   33104 kubeadm.go:310] 
	I0927 17:41:56.128104   33104 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 17:41:56.128151   33104 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 17:41:56.128195   33104 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 17:41:56.128202   33104 kubeadm.go:310] 
	I0927 17:41:56.128243   33104 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 17:41:56.128249   33104 kubeadm.go:310] 
	I0927 17:41:56.128286   33104 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 17:41:56.128292   33104 kubeadm.go:310] 
	I0927 17:41:56.128338   33104 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 17:41:56.128406   33104 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 17:41:56.128466   33104 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 17:41:56.128474   33104 kubeadm.go:310] 
	I0927 17:41:56.128548   33104 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 17:41:56.128620   33104 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 17:41:56.128629   33104 kubeadm.go:310] 
	I0927 17:41:56.128700   33104 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hgqoqf.s456496vm8m19s9c \
	I0927 17:41:56.128804   33104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 \
	I0927 17:41:56.128840   33104 kubeadm.go:310] 	--control-plane 
	I0927 17:41:56.128853   33104 kubeadm.go:310] 
	I0927 17:41:56.128959   33104 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 17:41:56.128965   33104 kubeadm.go:310] 
	I0927 17:41:56.129032   33104 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hgqoqf.s456496vm8m19s9c \
	I0927 17:41:56.129135   33104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 
	I0927 17:41:56.129145   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:56.129152   33104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 17:41:56.130873   33104 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 17:41:56.132138   33104 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 17:41:56.137758   33104 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 17:41:56.137776   33104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 17:41:56.158395   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
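
With kubeadm init complete, the log shows minikube detecting a multi-node profile, choosing kindnet, and applying the generated CNI manifest with the bundled kubectl. A minimal sketch of that apply step via os/exec, using the exact command from the log (error handling simplified):

	// applycni.go: apply a CNI manifest with the versioned kubectl binary,
	// as the "kubectl apply ... -f /var/tmp/minikube/cni.yaml" line above does.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml",
		)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl apply failed: %v\n%s", err, out)
		}
		log.Printf("%s", out)
	}
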
	I0927 17:41:56.545302   33104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 17:41:56.545392   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:56.545450   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477 minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=true
	I0927 17:41:56.591362   33104 ops.go:34] apiserver oom_adj: -16
	I0927 17:41:56.760276   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:57.260604   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:57.760791   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:58.261339   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:58.760457   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.260517   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.760470   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.868738   33104 kubeadm.go:1113] duration metric: took 3.32341585s to wait for elevateKubeSystemPrivileges
	I0927 17:41:59.868781   33104 kubeadm.go:394] duration metric: took 14.932536309s to StartCluster
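
The lines above poll `kubectl get sa default` roughly every 500ms until the default ServiceAccount exists, then report the elapsed time as a duration metric. A minimal polling sketch with the same command and interval; the overall timeout is an assumption, not a value from the log:

	// waitsa.go: poll until the "default" ServiceAccount exists, mirroring the
	// repeated "kubectl get sa default" calls above.
	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		deadline := start.Add(2 * time.Minute) // assumed timeout, not from the log
		for {
			err := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.31.1/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig",
			).Run()
			if err == nil {
				log.Printf("duration metric: took %s to wait for default ServiceAccount", time.Since(start))
				return
			}
			if time.Now().After(deadline) {
				log.Fatal("timed out waiting for default ServiceAccount")
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
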
	I0927 17:41:59.868801   33104 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:59.868885   33104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:59.869758   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:59.870009   33104 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:41:59.870033   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 17:41:59.870039   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:41:59.870060   33104 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 17:41:59.870153   33104 addons.go:69] Setting storage-provisioner=true in profile "ha-748477"
	I0927 17:41:59.870163   33104 addons.go:69] Setting default-storageclass=true in profile "ha-748477"
	I0927 17:41:59.870172   33104 addons.go:234] Setting addon storage-provisioner=true in "ha-748477"
	I0927 17:41:59.870182   33104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-748477"
	I0927 17:41:59.870204   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:41:59.870252   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:41:59.870584   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.870621   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.870672   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.870714   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.886004   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0927 17:41:59.886153   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0927 17:41:59.886564   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.886600   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.887110   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.887133   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.887228   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.887251   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.887515   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.887575   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.887749   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.888058   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.888106   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.889954   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:59.890260   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 17:41:59.890780   33104 cert_rotation.go:140] Starting client certificate rotation controller
	I0927 17:41:59.891045   33104 addons.go:234] Setting addon default-storageclass=true in "ha-748477"
	I0927 17:41:59.891088   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:41:59.891458   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.891503   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.903067   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0927 17:41:59.903643   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.904195   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.904216   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.904591   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.904788   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.906479   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:59.907260   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0927 17:41:59.907760   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.908176   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.908198   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.908493   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.908731   33104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 17:41:59.909071   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.909112   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.910017   33104 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:41:59.910034   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 17:41:59.910047   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:59.912776   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.913203   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:59.913230   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.913350   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:59.913531   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:59.913696   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:59.913877   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:59.924467   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44339
	I0927 17:41:59.924928   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.925397   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.925419   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.925727   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.925908   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.927570   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:59.927761   33104 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 17:41:59.927779   33104 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 17:41:59.927796   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:59.930818   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.931197   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:59.931223   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.931372   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:59.931551   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:59.931697   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:59.931825   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:59.972954   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 17:42:00.031245   33104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:42:00.108187   33104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 17:42:00.508824   33104 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
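
The pipeline above edits the coredns ConfigMap so that host.minikube.internal resolves to the host gateway: a hosts block is spliced in just before the `forward . /etc/resolv.conf` directive and the result is fed back through `kubectl replace`. A string-level sketch of the same Corefile edit (the sample Corefile is abbreviated, and the extra `log` directive added by the sed expression is omitted):

	// corednshosts.go: splice a "hosts" block into a Corefile ahead of the
	// forward directive, as the sed pipeline above does.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, ip string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		sample := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(sample, "192.168.39.1"))
	}
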
	I0927 17:42:00.769682   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.769710   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.769738   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.769760   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770044   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770066   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770083   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770095   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770104   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.770114   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770154   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.770162   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770305   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770325   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770489   33104 main.go:141] libmachine: (ha-748477) DBG | Closing plugin on server side
	I0927 17:42:00.770511   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770537   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770589   33104 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 17:42:00.770615   33104 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 17:42:00.770734   33104 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0927 17:42:00.770749   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:00.770760   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:00.770772   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:00.784878   33104 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0927 17:42:00.785650   33104 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0927 17:42:00.785672   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:00.785684   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:00.785689   33104 round_trippers.go:473]     Content-Type: application/json
	I0927 17:42:00.785695   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:00.797693   33104 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
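
The round_trippers lines show the default-storageclass addon reading the "standard" StorageClass with a GET and writing it back with a PUT against the API server at 192.168.39.254:8443 (the HA VIP). An equivalent read-modify-update with client-go might look like the sketch below; the kubeconfig path and the default-class annotation are assumptions, not the addon's actual code:

	// defaultsc.go: fetch the "standard" StorageClass and mark it as default,
	// analogous to the GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above.
	package main

	import (
		"context"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19712-11184/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
			log.Fatal(err)
		}
	}
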
	I0927 17:42:00.797883   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.797901   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.798229   33104 main.go:141] libmachine: (ha-748477) DBG | Closing plugin on server side
	I0927 17:42:00.798283   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.798298   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.800228   33104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 17:42:00.801634   33104 addons.go:510] duration metric: took 931.586908ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 17:42:00.801675   33104 start.go:246] waiting for cluster config update ...
	I0927 17:42:00.801692   33104 start.go:255] writing updated cluster config ...
	I0927 17:42:00.803627   33104 out.go:201] 
	I0927 17:42:00.805265   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:00.805361   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:00.807406   33104 out.go:177] * Starting "ha-748477-m02" control-plane node in "ha-748477" cluster
	I0927 17:42:00.809474   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:42:00.809516   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:42:00.809668   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:42:00.809688   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:42:00.809795   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:00.810056   33104 start.go:360] acquireMachinesLock for ha-748477-m02: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:42:00.810115   33104 start.go:364] duration metric: took 34.075µs to acquireMachinesLock for "ha-748477-m02"
	I0927 17:42:00.810139   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:00.810241   33104 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0927 17:42:00.812114   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:42:00.812247   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:00.812304   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:00.827300   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0927 17:42:00.827815   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:00.828325   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:00.828351   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:00.828634   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:00.828813   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:00.828931   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:00.829052   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:42:00.829102   33104 client.go:168] LocalClient.Create starting
	I0927 17:42:00.829156   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:42:00.829194   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:42:00.829211   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:42:00.829254   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:42:00.829271   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:42:00.829282   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:42:00.829297   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:42:00.829305   33104 main.go:141] libmachine: (ha-748477-m02) Calling .PreCreateCheck
	I0927 17:42:00.829460   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:00.829822   33104 main.go:141] libmachine: Creating machine...
	I0927 17:42:00.829839   33104 main.go:141] libmachine: (ha-748477-m02) Calling .Create
	I0927 17:42:00.829995   33104 main.go:141] libmachine: (ha-748477-m02) Creating KVM machine...
	I0927 17:42:00.831397   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found existing default KVM network
	I0927 17:42:00.831514   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found existing private KVM network mk-ha-748477
	I0927 17:42:00.831650   33104 main.go:141] libmachine: (ha-748477-m02) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 ...
	I0927 17:42:00.831667   33104 main.go:141] libmachine: (ha-748477-m02) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:42:00.831765   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:00.831653   33474 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:42:00.831855   33104 main.go:141] libmachine: (ha-748477-m02) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:42:01.074875   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.074746   33474 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa...
	I0927 17:42:01.284394   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.284285   33474 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk...
	I0927 17:42:01.284431   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Writing magic tar header
	I0927 17:42:01.284445   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Writing SSH key tar header
	I0927 17:42:01.285094   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.284993   33474 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 ...
	I0927 17:42:01.285131   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02
	I0927 17:42:01.285145   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 (perms=drwx------)
	I0927 17:42:01.285162   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:42:01.285184   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:42:01.285194   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:42:01.285208   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:42:01.285223   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:42:01.285233   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:42:01.285245   33104 main.go:141] libmachine: (ha-748477-m02) Creating domain...
	I0927 17:42:01.285258   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:42:01.285272   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:42:01.285288   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:42:01.285298   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:42:01.285311   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home
	I0927 17:42:01.285320   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Skipping /home - not owner
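
The permission lines above walk from the new machine directory up toward /home, ensuring each directory the jenkins user owns is traversable and skipping /home itself ("not owner"). A minimal Linux-only sketch of that walk; the ownership check via syscall.Stat_t and the exact mode bits are assumptions:

	// fixperms.go: ensure each ancestor of a machine directory is traversable,
	// mirroring the "Setting executable bit set on ..." lines above.
	package main

	import (
		"log"
		"os"
		"path/filepath"
		"syscall"
	)

	func main() {
		dir := "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02"
		for d := dir; d != "/" && d != "."; d = filepath.Dir(d) {
			info, err := os.Stat(d)
			if err != nil {
				log.Fatal(err)
			}
			st, ok := info.Sys().(*syscall.Stat_t)
			if !ok || int(st.Uid) != os.Getuid() {
				log.Printf("Skipping %s - not owner", d)
				continue
			}
			// Add the owner execute bit so the directory can be traversed.
			if err := os.Chmod(d, info.Mode().Perm()|0o100); err != nil {
				log.Fatal(err)
			}
		}
	}
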
	I0927 17:42:01.286214   33104 main.go:141] libmachine: (ha-748477-m02) define libvirt domain using xml: 
	I0927 17:42:01.286236   33104 main.go:141] libmachine: (ha-748477-m02) <domain type='kvm'>
	I0927 17:42:01.286246   33104 main.go:141] libmachine: (ha-748477-m02)   <name>ha-748477-m02</name>
	I0927 17:42:01.286259   33104 main.go:141] libmachine: (ha-748477-m02)   <memory unit='MiB'>2200</memory>
	I0927 17:42:01.286286   33104 main.go:141] libmachine: (ha-748477-m02)   <vcpu>2</vcpu>
	I0927 17:42:01.286306   33104 main.go:141] libmachine: (ha-748477-m02)   <features>
	I0927 17:42:01.286319   33104 main.go:141] libmachine: (ha-748477-m02)     <acpi/>
	I0927 17:42:01.286326   33104 main.go:141] libmachine: (ha-748477-m02)     <apic/>
	I0927 17:42:01.286334   33104 main.go:141] libmachine: (ha-748477-m02)     <pae/>
	I0927 17:42:01.286340   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286348   33104 main.go:141] libmachine: (ha-748477-m02)   </features>
	I0927 17:42:01.286353   33104 main.go:141] libmachine: (ha-748477-m02)   <cpu mode='host-passthrough'>
	I0927 17:42:01.286361   33104 main.go:141] libmachine: (ha-748477-m02)   
	I0927 17:42:01.286365   33104 main.go:141] libmachine: (ha-748477-m02)   </cpu>
	I0927 17:42:01.286372   33104 main.go:141] libmachine: (ha-748477-m02)   <os>
	I0927 17:42:01.286377   33104 main.go:141] libmachine: (ha-748477-m02)     <type>hvm</type>
	I0927 17:42:01.286386   33104 main.go:141] libmachine: (ha-748477-m02)     <boot dev='cdrom'/>
	I0927 17:42:01.286396   33104 main.go:141] libmachine: (ha-748477-m02)     <boot dev='hd'/>
	I0927 17:42:01.286408   33104 main.go:141] libmachine: (ha-748477-m02)     <bootmenu enable='no'/>
	I0927 17:42:01.286417   33104 main.go:141] libmachine: (ha-748477-m02)   </os>
	I0927 17:42:01.286442   33104 main.go:141] libmachine: (ha-748477-m02)   <devices>
	I0927 17:42:01.286465   33104 main.go:141] libmachine: (ha-748477-m02)     <disk type='file' device='cdrom'>
	I0927 17:42:01.286483   33104 main.go:141] libmachine: (ha-748477-m02)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/boot2docker.iso'/>
	I0927 17:42:01.286494   33104 main.go:141] libmachine: (ha-748477-m02)       <target dev='hdc' bus='scsi'/>
	I0927 17:42:01.286503   33104 main.go:141] libmachine: (ha-748477-m02)       <readonly/>
	I0927 17:42:01.286512   33104 main.go:141] libmachine: (ha-748477-m02)     </disk>
	I0927 17:42:01.286521   33104 main.go:141] libmachine: (ha-748477-m02)     <disk type='file' device='disk'>
	I0927 17:42:01.286532   33104 main.go:141] libmachine: (ha-748477-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:42:01.286553   33104 main.go:141] libmachine: (ha-748477-m02)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk'/>
	I0927 17:42:01.286577   33104 main.go:141] libmachine: (ha-748477-m02)       <target dev='hda' bus='virtio'/>
	I0927 17:42:01.286589   33104 main.go:141] libmachine: (ha-748477-m02)     </disk>
	I0927 17:42:01.286596   33104 main.go:141] libmachine: (ha-748477-m02)     <interface type='network'>
	I0927 17:42:01.286606   33104 main.go:141] libmachine: (ha-748477-m02)       <source network='mk-ha-748477'/>
	I0927 17:42:01.286615   33104 main.go:141] libmachine: (ha-748477-m02)       <model type='virtio'/>
	I0927 17:42:01.286623   33104 main.go:141] libmachine: (ha-748477-m02)     </interface>
	I0927 17:42:01.286631   33104 main.go:141] libmachine: (ha-748477-m02)     <interface type='network'>
	I0927 17:42:01.286637   33104 main.go:141] libmachine: (ha-748477-m02)       <source network='default'/>
	I0927 17:42:01.286669   33104 main.go:141] libmachine: (ha-748477-m02)       <model type='virtio'/>
	I0927 17:42:01.286682   33104 main.go:141] libmachine: (ha-748477-m02)     </interface>
	I0927 17:42:01.286689   33104 main.go:141] libmachine: (ha-748477-m02)     <serial type='pty'>
	I0927 17:42:01.286700   33104 main.go:141] libmachine: (ha-748477-m02)       <target port='0'/>
	I0927 17:42:01.286710   33104 main.go:141] libmachine: (ha-748477-m02)     </serial>
	I0927 17:42:01.286718   33104 main.go:141] libmachine: (ha-748477-m02)     <console type='pty'>
	I0927 17:42:01.286745   33104 main.go:141] libmachine: (ha-748477-m02)       <target type='serial' port='0'/>
	I0927 17:42:01.286757   33104 main.go:141] libmachine: (ha-748477-m02)     </console>
	I0927 17:42:01.286769   33104 main.go:141] libmachine: (ha-748477-m02)     <rng model='virtio'>
	I0927 17:42:01.286780   33104 main.go:141] libmachine: (ha-748477-m02)       <backend model='random'>/dev/random</backend>
	I0927 17:42:01.286789   33104 main.go:141] libmachine: (ha-748477-m02)     </rng>
	I0927 17:42:01.286798   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286805   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286814   33104 main.go:141] libmachine: (ha-748477-m02)   </devices>
	I0927 17:42:01.286821   33104 main.go:141] libmachine: (ha-748477-m02) </domain>
	I0927 17:42:01.286829   33104 main.go:141] libmachine: (ha-748477-m02) 
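
The block above is the libvirt domain definition handed to KVM for ha-748477-m02: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO on a SCSI cdrom, the raw disk on virtio, and two virtio NICs (the private mk-ha-748477 network and the default network). Below is a text/template sketch that renders a stripped-down version of the same XML; the template and field names are illustrative, not minikube's actual template:

	// domainxml.go: render a minimal libvirt domain definition similar to the
	// one printed above (serial console, RNG, and other devices omitted).
	package main

	import (
		"log"
		"os"
		"text/template"
	)

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		err := t.Execute(os.Stdout, map[string]any{
			"Name":      "ha-748477-m02",
			"MemoryMiB": 2200,
			"CPUs":      2,
			"ISOPath":   "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/boot2docker.iso",
			"DiskPath":  "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk",
			"Network":   "mk-ha-748477",
		})
		if err != nil {
			log.Fatal(err)
		}
	}
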
	I0927 17:42:01.295323   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:dc:55:b0 in network default
	I0927 17:42:01.296033   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring networks are active...
	I0927 17:42:01.296060   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:01.297259   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring network default is active
	I0927 17:42:01.297652   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring network mk-ha-748477 is active
	I0927 17:42:01.298102   33104 main.go:141] libmachine: (ha-748477-m02) Getting domain xml...
	I0927 17:42:01.298966   33104 main.go:141] libmachine: (ha-748477-m02) Creating domain...
	I0927 17:42:02.564561   33104 main.go:141] libmachine: (ha-748477-m02) Waiting to get IP...
	I0927 17:42:02.565309   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:02.565769   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:02.565802   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:02.565771   33474 retry.go:31] will retry after 303.737915ms: waiting for machine to come up
	I0927 17:42:02.871429   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:02.871830   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:02.871854   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:02.871802   33474 retry.go:31] will retry after 330.658569ms: waiting for machine to come up
	I0927 17:42:03.204264   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:03.204715   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:03.204739   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:03.204669   33474 retry.go:31] will retry after 480.920904ms: waiting for machine to come up
	I0927 17:42:03.687319   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:03.687901   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:03.687922   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:03.687827   33474 retry.go:31] will retry after 531.287792ms: waiting for machine to come up
	I0927 17:42:04.220560   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:04.221117   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:04.221147   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:04.221064   33474 retry.go:31] will retry after 645.559246ms: waiting for machine to come up
	I0927 17:42:04.867651   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:04.868069   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:04.868092   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:04.868034   33474 retry.go:31] will retry after 621.251066ms: waiting for machine to come up
	I0927 17:42:05.491583   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:05.492060   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:05.492081   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:05.492018   33474 retry.go:31] will retry after 1.144789742s: waiting for machine to come up
	I0927 17:42:06.638697   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:06.639055   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:06.639079   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:06.639012   33474 retry.go:31] will retry after 1.297542087s: waiting for machine to come up
	I0927 17:42:07.937857   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:07.938263   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:07.938304   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:07.938221   33474 retry.go:31] will retry after 1.728772395s: waiting for machine to come up
	I0927 17:42:09.668990   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:09.669424   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:09.669449   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:09.669386   33474 retry.go:31] will retry after 1.816616404s: waiting for machine to come up
	I0927 17:42:11.487206   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:11.487803   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:11.487830   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:11.487752   33474 retry.go:31] will retry after 2.262897527s: waiting for machine to come up
	I0927 17:42:13.751754   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:13.752138   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:13.752156   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:13.752109   33474 retry.go:31] will retry after 2.651419719s: waiting for machine to come up
	I0927 17:42:16.404625   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:16.405063   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:16.405087   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:16.405019   33474 retry.go:31] will retry after 2.90839218s: waiting for machine to come up
	I0927 17:42:19.317108   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:19.317506   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:19.317528   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:19.317483   33474 retry.go:31] will retry after 5.075657253s: waiting for machine to come up
	I0927 17:42:24.396494   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.396873   33104 main.go:141] libmachine: (ha-748477-m02) Found IP for machine: 192.168.39.58
	I0927 17:42:24.396891   33104 main.go:141] libmachine: (ha-748477-m02) Reserving static IP address...
	I0927 17:42:24.396899   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has current primary IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.397346   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find host DHCP lease matching {name: "ha-748477-m02", mac: "52:54:00:70:40:9e", ip: "192.168.39.58"} in network mk-ha-748477
	I0927 17:42:24.472936   33104 main.go:141] libmachine: (ha-748477-m02) Reserved static IP address: 192.168.39.58
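The string of "will retry after ..." lines above is a grow-the-interval polling loop: the driver repeatedly asks libvirt for a DHCP lease on the new MAC and sleeps a little longer after each miss until an address appears (here 192.168.39.58 after roughly 22 seconds). A schematic version of that pattern, assuming a generic lookup callback rather than minikube's actual retry package:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it yields an address or the timeout expires.
	// The intervals roughly mirror the log: start around 300ms, grow on each
	// attempt, and cap the wait so a slow DHCP server is still polled often.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := lookup()
			if err == nil && ip != "" {
				return ip, nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("timed out after %d attempts: %v", attempt, err)
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			if wait = wait * 3 / 2; wait > 5*time.Second {
				wait = 5 * time.Second
			}
		}
	}

	func main() {
		_, err := waitForIP(func() (string, error) {
			return "", errors.New("unable to find current IP address")
		}, 2*time.Second)
		fmt.Println(err)
	}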
	I0927 17:42:24.472971   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Getting to WaitForSSH function...
	I0927 17:42:24.472980   33104 main.go:141] libmachine: (ha-748477-m02) Waiting for SSH to be available...
	I0927 17:42:24.475305   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.475680   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.475707   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.475845   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using SSH client type: external
	I0927 17:42:24.475874   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa (-rw-------)
	I0927 17:42:24.475906   33104 main.go:141] libmachine: (ha-748477-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:42:24.475929   33104 main.go:141] libmachine: (ha-748477-m02) DBG | About to run SSH command:
	I0927 17:42:24.475966   33104 main.go:141] libmachine: (ha-748477-m02) DBG | exit 0
	I0927 17:42:24.606575   33104 main.go:141] libmachine: (ha-748477-m02) DBG | SSH cmd err, output: <nil>: 
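The "Using SSH client type: external" exchange above shells out to the system ssh binary with a locked-down option set and runs exit 0 purely to confirm the guest accepts key-based SSH. A minimal stand-in for that probe (the option list is copied from the log; the Go wrapper and key path are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReachable runs "exit 0" over the system ssh client, mirroring the
	// option list logged above. A zero exit status means the guest is booted
	// far enough to be provisioned over SSH.
	func sshReachable(ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + ip,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(sshReachable("192.168.39.58", "/path/to/id_rsa")) // illustrative key path
	}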
	I0927 17:42:24.606899   33104 main.go:141] libmachine: (ha-748477-m02) KVM machine creation complete!
	I0927 17:42:24.607222   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:24.607761   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:24.607936   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:24.608087   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:42:24.608100   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetState
	I0927 17:42:24.609395   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:42:24.609407   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:42:24.609412   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:42:24.609417   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.611533   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.611868   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.611888   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.612022   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.612209   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.612399   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.612547   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.612697   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.612879   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.612890   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:42:24.725891   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:42:24.725919   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:42:24.725930   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.728630   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.728976   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.729006   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.729191   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.729340   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.729487   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.729609   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.729734   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.730028   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.730047   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:42:24.843111   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:42:24.843154   33104 main.go:141] libmachine: found compatible host: buildroot
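"found compatible host: buildroot" is derived from the ID field of the /etc/os-release dump just above, which selects the Buildroot provisioner. A tiny, hypothetical sketch of that detection step:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner pulls the ID= line out of /etc/os-release output;
	// minikube's ISO reports ID=buildroot, which picks the Buildroot path.
	func detectProvisioner(osRelease string) (string, error) {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", fmt.Errorf("no ID field in os-release")
	}

	func main() {
		id, err := detectProvisioner("NAME=Buildroot\nID=buildroot\nVERSION_ID=2023.02.9\n")
		fmt.Println(id, err) // buildroot <nil>
	}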
	I0927 17:42:24.843160   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:42:24.843168   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:24.843396   33104 buildroot.go:166] provisioning hostname "ha-748477-m02"
	I0927 17:42:24.843419   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:24.843631   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.846504   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.847013   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.847039   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.847168   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.847341   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.847483   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.847608   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.847738   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.847896   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.847908   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477-m02 && echo "ha-748477-m02" | sudo tee /etc/hostname
	I0927 17:42:24.977249   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477-m02
	
	I0927 17:42:24.977281   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.980072   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.980385   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.980420   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.980605   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.980758   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.980898   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.980996   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.981123   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.981324   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.981348   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:42:25.103047   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:42:25.103077   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:42:25.103095   33104 buildroot.go:174] setting up certificates
	I0927 17:42:25.103105   33104 provision.go:84] configureAuth start
	I0927 17:42:25.103113   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:25.103329   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.105948   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.106264   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.106287   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.106466   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.109004   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.109390   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.109418   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.109562   33104 provision.go:143] copyHostCerts
	I0927 17:42:25.109608   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:42:25.109641   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:42:25.109649   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:42:25.109714   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:42:25.109782   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:42:25.109802   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:42:25.109808   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:42:25.109832   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:42:25.109873   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:42:25.109891   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:42:25.109897   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:42:25.109916   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:42:25.109964   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477-m02 san=[127.0.0.1 192.168.39.58 ha-748477-m02 localhost minikube]
	I0927 17:42:25.258618   33104 provision.go:177] copyRemoteCerts
	I0927 17:42:25.258690   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:42:25.258710   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.261212   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.261548   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.261586   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.261707   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.261895   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.262022   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.262183   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.348808   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:42:25.348876   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:42:25.372365   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:42:25.372460   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:42:25.397105   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:42:25.397179   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:42:25.422506   33104 provision.go:87] duration metric: took 319.390123ms to configureAuth
	I0927 17:42:25.422532   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:42:25.422731   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:25.422799   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.425981   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.426408   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.426451   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.426606   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.426811   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.426969   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.427088   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.427226   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:25.427394   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:25.427408   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:42:25.661521   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:42:25.661549   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:42:25.661558   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetURL
	I0927 17:42:25.662897   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using libvirt version 6000000
	I0927 17:42:25.665077   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.665379   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.665406   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.665564   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:42:25.665578   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:42:25.665585   33104 client.go:171] duration metric: took 24.836463256s to LocalClient.Create
	I0927 17:42:25.665605   33104 start.go:167] duration metric: took 24.836555157s to libmachine.API.Create "ha-748477"
	I0927 17:42:25.665614   33104 start.go:293] postStartSetup for "ha-748477-m02" (driver="kvm2")
	I0927 17:42:25.665623   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:42:25.665638   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.665877   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:42:25.665912   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.668048   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.668346   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.668368   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.668516   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.668698   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.668825   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.668921   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.756903   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:42:25.761205   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:42:25.761239   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:42:25.761301   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:42:25.761393   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:42:25.761406   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:42:25.761506   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:42:25.771507   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:42:25.794679   33104 start.go:296] duration metric: took 129.051968ms for postStartSetup
	I0927 17:42:25.794731   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:25.795430   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.797924   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.798413   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.798536   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.798704   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:25.798927   33104 start.go:128] duration metric: took 24.988675406s to createHost
	I0927 17:42:25.798952   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.801621   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.801988   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.802014   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.802223   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.802493   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.802671   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.802846   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.803001   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:25.803176   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:25.803187   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:42:25.919256   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458945.878335898
	
	I0927 17:42:25.919284   33104 fix.go:216] guest clock: 1727458945.878335898
	I0927 17:42:25.919291   33104 fix.go:229] Guest: 2024-09-27 17:42:25.878335898 +0000 UTC Remote: 2024-09-27 17:42:25.79893912 +0000 UTC m=+74.552336236 (delta=79.396778ms)
	I0927 17:42:25.919305   33104 fix.go:200] guest clock delta is within tolerance: 79.396778ms
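The date +%s.%N round trip above is a clock sanity check: the guest's wall clock is parsed and compared against the host's, and the skew (79.396778ms here) is accepted because it stays inside the tolerance. A schematic version of that comparison, reusing the two timestamps from the log (the 2-second threshold is an assumption for illustration):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// it drifts from the given local reference time.
	func clockDelta(guestOutput string, local time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(local), nil
	}

	func main() {
		local := time.Unix(1727458945, 798939120) // the "Remote" timestamp from the log
		d, err := clockDelta("1727458945.878335898", local)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		fmt.Printf("delta=%v, within tolerance: %v\n", d, d > -tolerance && d < tolerance)
	}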
	I0927 17:42:25.919309   33104 start.go:83] releasing machines lock for "ha-748477-m02", held for 25.109183327s
	I0927 17:42:25.919328   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.919584   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.923127   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.923545   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.923567   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.925887   33104 out.go:177] * Found network options:
	I0927 17:42:25.927311   33104 out.go:177]   - NO_PROXY=192.168.39.217
	W0927 17:42:25.928478   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:42:25.928534   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929113   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929289   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929384   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:42:25.929413   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	W0927 17:42:25.929520   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:42:25.929601   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:42:25.929627   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.932151   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932175   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932560   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.932590   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932615   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.932630   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932752   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.932954   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.932961   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.933111   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.933120   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.933235   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.933296   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.933372   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
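The two "fail to check proxy env: Error ip not in block" warnings a few lines up are benign: the new node's IP (192.168.39.58) is checked against each NO_PROXY entry (here only 192.168.39.217), and because it matches neither an exact address nor a CIDR block, minikube warns that proxies may not bypass this node and carries on. A simplified sketch of that membership check:

	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	// inNoProxy reports whether ip is covered by a comma-separated NO_PROXY
	// list, either as an exact address or inside a CIDR block. Simplified:
	// the real check also handles hostnames and wildcard suffixes.
	func inNoProxy(noProxy, ip string) bool {
		addr := net.ParseIP(ip)
		for _, entry := range strings.Split(noProxy, ",") {
			entry = strings.TrimSpace(entry)
			if entry == "" {
				continue
			}
			if entry == ip {
				return true
			}
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
				return true
			}
		}
		return false
	}

	func main() {
		fmt.Println(inNoProxy("192.168.39.217", "192.168.39.58")) // false -> "ip not in block"
	}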
	I0927 17:42:26.183554   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:42:26.189225   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:42:26.189283   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:42:26.205357   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:42:26.205380   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:42:26.205446   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:42:26.220556   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:42:26.233593   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:42:26.233652   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:42:26.247225   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:42:26.260534   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:42:26.378535   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:42:26.534217   33104 docker.go:233] disabling docker service ...
	I0927 17:42:26.534299   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:42:26.549457   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:42:26.564190   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:42:26.685257   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:42:26.798705   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:42:26.812177   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:42:26.830049   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:42:26.830103   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.840055   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:42:26.840116   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.850116   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.860785   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.870699   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:42:26.880704   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.890585   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.908416   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.918721   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:42:26.928323   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:42:26.928384   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:42:26.941204   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:42:26.951302   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:27.079256   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:42:27.173071   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:42:27.173154   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:42:27.178109   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:42:27.178161   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:42:27.181733   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:42:27.220015   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:42:27.220101   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:42:27.248905   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:42:27.278391   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:42:27.279800   33104 out.go:177]   - env NO_PROXY=192.168.39.217
	I0927 17:42:27.281146   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:27.283736   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:27.284063   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:27.284089   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:27.284314   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:42:27.288290   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:42:27.300052   33104 mustload.go:65] Loading cluster: ha-748477
	I0927 17:42:27.300240   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:27.300504   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:27.300539   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:27.315110   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I0927 17:42:27.315566   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:27.316043   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:27.316066   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:27.316373   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:27.316560   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:42:27.317977   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:42:27.318257   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:27.318292   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:27.332715   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41483
	I0927 17:42:27.333159   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:27.333632   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:27.333651   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:27.333971   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:27.334145   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:27.334286   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.58
	I0927 17:42:27.334297   33104 certs.go:194] generating shared ca certs ...
	I0927 17:42:27.334310   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.334448   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:42:27.334484   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:42:27.334493   33104 certs.go:256] generating profile certs ...
	I0927 17:42:27.334557   33104 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:42:27.334581   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3
	I0927 17:42:27.334596   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.254]
	I0927 17:42:27.465658   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 ...
	I0927 17:42:27.465688   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3: {Name:mkaab33c389419b06a9d77e9186d99602df50635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.465878   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3 ...
	I0927 17:42:27.465895   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3: {Name:mkd8c2f05dd9abfddfcaec4316f440a902331ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.465985   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:42:27.466113   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:42:27.466230   33104 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
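The apiserver.crt.4e710fd3 step above issues a CA-signed serving certificate whose IP SANs cover the service VIP, localhost, and all control-plane addresses, so the same cert stays valid as nodes join. The sketch below shows the general crypto/x509 recipe for such a certificate; the key size, common name, and validity period are placeholders rather than minikube's exact parameters:

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues a certificate signed by caCert/caKey whose IP SANs
	// mirror the san=[...] list in the log. It returns DER bytes; PEM encoding,
	// serial management and file locking are omitted from this sketch.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.58"), net.ParseIP("192.168.39.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}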
	I0927 17:42:27.466244   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:42:27.466256   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:42:27.466270   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:42:27.466282   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:42:27.466294   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:42:27.466308   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:42:27.466321   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:42:27.466333   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:42:27.466389   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:42:27.466416   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:42:27.466425   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:42:27.466444   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:42:27.466466   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:42:27.466487   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:42:27.466523   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:42:27.466547   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:42:27.466560   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:42:27.466572   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:27.466601   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:42:27.469497   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:27.469863   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:42:27.469893   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:27.470027   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:42:27.470244   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:42:27.470394   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:42:27.470523   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:42:27.543106   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 17:42:27.548154   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 17:42:27.558735   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 17:42:27.563158   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 17:42:27.573602   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 17:42:27.578182   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 17:42:27.588485   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 17:42:27.592478   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0927 17:42:27.603608   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 17:42:27.607668   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 17:42:27.620252   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 17:42:27.624885   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 17:42:27.644493   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:42:27.668339   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:42:27.691150   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:42:27.715241   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:42:27.738617   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 17:42:27.761798   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 17:42:27.784499   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:42:27.807853   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:42:27.830972   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:42:27.853871   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:42:27.876810   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:42:27.900824   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 17:42:27.917097   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 17:42:27.933218   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 17:42:27.951040   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0927 17:42:27.967600   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 17:42:27.984161   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 17:42:28.000351   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 17:42:28.016844   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:42:28.022390   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:42:28.032675   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.037756   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.037825   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.043874   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:42:28.054764   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:42:28.065690   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.070320   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.070397   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.075845   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:42:28.086186   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:42:28.096788   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.101134   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.101189   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.106935   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
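The three symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow the standard OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is what the `openssl x509 -hash` calls in the log compute. A minimal sketch of the same step, using the cert path from the log (run on the node over SSH in practice):

    # Derive the subject hash and create the trust-store symlink (illustrative only).
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints the 8-hex-digit subject hash
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # same kind of link the test creates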
	I0927 17:42:28.117866   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:42:28.122166   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:42:28.122230   33104 kubeadm.go:934] updating node {m02 192.168.39.58 8443 v1.31.1 crio true true} ...
	I0927 17:42:28.122310   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
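The kubelet unit printed above is staged onto the node as a systemd drop-in later in this log (scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A quick way to confirm the ExecStart override took effect, assuming SSH access to ha-748477-m02, is to let systemd show the merged unit:

    # Illustrative check on the node after the drop-in is installed.
    systemctl cat kubelet                 # prints kubelet.service plus all drop-ins
    systemctl show kubelet -p ExecStart   # shows the ExecStart flags actually in effect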
	I0927 17:42:28.122340   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:42:28.122374   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:42:28.138780   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:42:28.138839   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
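The generated manifest above runs kube-vip as a static pod with ARP-based VIP advertisement (vip_arp), leader election through the plndr-cp-lock lease, and control-plane load balancing on port 8443; the earlier modprobe of the ip_vs modules is what lb_enable relies on. Once the cluster is up, the VIP and the lease can be spot-checked roughly like this (illustrative, assuming kubectl access and that eth0 is the VIP interface as configured):

    # Which control-plane node currently holds the kube-vip leader lease:
    kubectl -n kube-system get lease plndr-cp-lock
    # On that node, the VIP from the manifest should be bound to eth0:
    ip addr show eth0 | grep 192.168.39.254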
	I0927 17:42:28.138889   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:42:28.148160   33104 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 17:42:28.148222   33104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 17:42:28.157728   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 17:42:28.157755   33104 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0927 17:42:28.157763   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:42:28.157776   33104 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0927 17:42:28.157830   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:42:28.161980   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 17:42:28.162007   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 17:42:29.300439   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:42:29.320131   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:42:29.320267   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:42:29.326589   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 17:42:29.326624   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 17:42:29.546925   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:42:29.547011   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:42:29.561849   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 17:42:29.561885   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
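Each binary transfer above follows the same check-then-copy pattern: a `stat -c "%s %y"` existence probe that is expected to exit 1 on first start, followed by a copy of the cached binary. A hypothetical local equivalent of that logic, with paths taken from the log (the real transfer runs over the SSH client set up earlier):

    # Sketch of the existence check and copy performed for each k8s binary.
    BIN=/var/lib/minikube/binaries/v1.31.1/kubelet
    CACHE=$HOME/.minikube/cache/linux/amd64/v1.31.1/kubelet
    if ! stat -c "%s %y" "$BIN" >/dev/null 2>&1; then   # status 1 == not present yet
      sudo install -m 0755 "$CACHE" "$BIN"              # stand-in for the scp in the log
    fi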
	I0927 17:42:29.913564   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 17:42:29.925322   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 17:42:29.944272   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:42:29.964365   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:42:29.984051   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:42:29.988161   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:42:30.002830   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:30.137318   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:42:30.153192   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:42:30.153643   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:30.153695   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:30.169225   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0927 17:42:30.169762   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:30.170299   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:30.170317   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:30.170628   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:30.170823   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:30.170945   33104 start.go:317] joinCluster: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:42:30.171062   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 17:42:30.171085   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:42:30.174028   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:30.174526   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:42:30.174587   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:30.174767   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:42:30.174933   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:42:30.175042   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:42:30.175135   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:42:30.312283   33104 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:30.312328   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 65pjfr.i6bbe1dq2ien9ht7 --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m02 --control-plane --apiserver-advertise-address=192.168.39.58 --apiserver-bind-port=8443"
	I0927 17:42:51.845707   33104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 65pjfr.i6bbe1dq2ien9ht7 --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m02 --control-plane --apiserver-advertise-address=192.168.39.58 --apiserver-bind-port=8443": (21.533351476s)
	I0927 17:42:51.845746   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 17:42:52.382325   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477-m02 minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=false
	I0927 17:42:52.503362   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-748477-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 17:42:52.636002   33104 start.go:319] duration metric: took 22.465049006s to joinCluster
	I0927 17:42:52.636077   33104 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:52.636363   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:52.637939   33104 out.go:177] * Verifying Kubernetes components...
	I0927 17:42:52.639336   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:52.942345   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:42:52.995016   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:42:52.995348   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 17:42:52.995436   33104 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0927 17:42:52.995698   33104 node_ready.go:35] waiting up to 6m0s for node "ha-748477-m02" to be "Ready" ...
	I0927 17:42:52.995829   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:52.995840   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:52.995852   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:52.995860   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:53.010565   33104 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0927 17:42:53.496570   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:53.496600   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:53.496611   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:53.496618   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:53.501635   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:53.996537   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:53.996562   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:53.996573   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:53.996580   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.000293   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:54.496339   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:54.496367   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:54.496379   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:54.496386   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.500335   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:54.996231   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:54.996259   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:54.996267   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:54.996270   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.999765   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:55.000291   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:55.496156   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:55.496179   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:55.496190   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:55.496194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:55.499869   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:55.995928   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:55.995956   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:55.995967   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:55.995976   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:56.000264   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:56.496233   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:56.496262   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:56.496274   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:56.496280   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:56.508959   33104 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0927 17:42:56.996002   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:56.996027   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:56.996035   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:56.996039   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:57.000487   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:57.001143   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:57.496517   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:57.496539   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:57.496547   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:57.496551   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:57.500687   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:57.996942   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:57.996968   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:57.996980   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:57.996985   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:58.007878   33104 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0927 17:42:58.495950   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:58.495978   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:58.495986   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:58.495992   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:58.502154   33104 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0927 17:42:58.995965   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:58.995987   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:58.995994   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:58.995999   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:59.120906   33104 round_trippers.go:574] Response Status: 200 OK in 124 milliseconds
	I0927 17:42:59.121564   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:59.496878   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:59.496899   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:59.496907   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:59.496913   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:59.500334   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:59.996861   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:59.996891   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:59.996904   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:59.996909   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:00.000651   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:00.496984   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:00.497010   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:00.497020   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:00.497025   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:00.501929   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:00.996193   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:00.996216   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:00.996224   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:00.996228   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:01.000081   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:01.496245   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:01.496271   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:01.496280   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:01.496289   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:01.500327   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:01.500876   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:01.996256   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:01.996293   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:01.996319   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:01.996323   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:02.000731   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:02.496770   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:02.496794   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:02.496807   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:02.496811   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:02.499906   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:02.996753   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:02.996778   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:02.996788   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:02.996794   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.000162   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:03.496074   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:03.496103   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:03.496115   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:03.496122   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.500371   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:03.500905   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:03.996146   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:03.996168   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:03.996176   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:03.996180   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.999817   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:04.496897   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:04.496927   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:04.496938   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:04.496946   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:04.501634   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:04.996866   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:04.996886   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:04.996894   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:04.996899   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.000028   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:05.496388   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:05.496410   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:05.496417   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.496421   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:05.501021   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:05.501573   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:05.996337   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:05.996362   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:05.996371   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.996376   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:05.999502   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:06.496159   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:06.496185   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:06.496196   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:06.496201   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:06.499954   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:06.996765   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:06.996784   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:06.996792   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:06.996796   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:07.000129   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:07.496829   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:07.496853   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:07.496864   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:07.496868   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:07.499884   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:07.996447   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:07.996472   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:07.996480   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:07.996485   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:08.000400   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:08.001102   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:08.496398   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:08.496428   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:08.496436   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:08.496440   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:08.499609   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:08.996547   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:08.996584   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:08.996595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:08.996600   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:09.000044   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:09.495922   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:09.495945   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:09.495953   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:09.495957   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:09.500237   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:09.996168   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:09.996191   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:09.996199   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:09.996202   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.000717   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:10.001176   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:10.496022   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:10.496057   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.496065   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.496068   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.500059   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.500678   33104 node_ready.go:49] node "ha-748477-m02" has status "Ready":"True"
	I0927 17:43:10.500698   33104 node_ready.go:38] duration metric: took 17.504959286s for node "ha-748477-m02" to be "Ready" ...
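The readiness poll above issues a GET against /api/v1/nodes/ha-748477-m02 roughly every 500ms until the node's Ready condition flips to True (about 17.5s here). An equivalent one-off check with kubectl, shown only for illustration since the test talks to the API server directly:

    # Wait for the same Ready condition the poll loop checks, with the same 6m cap.
    kubectl wait --for=condition=Ready node/ha-748477-m02 --timeout=6m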
	I0927 17:43:10.500708   33104 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:43:10.500784   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:10.500794   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.500801   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.500807   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.509536   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:43:10.516733   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.516818   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n99lr
	I0927 17:43:10.516827   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.516834   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.516839   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.520256   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.520854   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.520869   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.520876   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.520880   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.523812   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.524358   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.524373   33104 pod_ready.go:82] duration metric: took 7.610815ms for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.524381   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.524430   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qvp2z
	I0927 17:43:10.524439   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.524446   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.524450   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.527923   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.528592   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.528607   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.528614   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.528619   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.531438   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.532103   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.532118   33104 pod_ready.go:82] duration metric: took 7.732114ms for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.532126   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.532176   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477
	I0927 17:43:10.532184   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.532190   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.532194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.534800   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.535485   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.535500   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.535508   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.535514   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.539175   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.539692   33104 pod_ready.go:93] pod "etcd-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.539712   33104 pod_ready.go:82] duration metric: took 7.578916ms for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.539724   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.539792   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m02
	I0927 17:43:10.539803   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.539813   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.539818   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.542127   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.542656   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:10.542672   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.542680   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.542687   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.545034   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.545710   33104 pod_ready.go:93] pod "etcd-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.545724   33104 pod_ready.go:82] duration metric: took 5.993851ms for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
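Each pod check above pairs a GET on the pod in kube-system with a GET on the node it is scheduled to, then reads the Ready condition from the pod status. The same checks can be approximated with kubectl using the label and component selectors listed at the start of the wait loop (illustrative only):

    # Approximate kubectl equivalents of the pod readiness checks above.
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system get pods -l component=etcd -o wide   # -o wide shows the node column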
	I0927 17:43:10.545736   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.697130   33104 request.go:632] Waited for 151.318503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:43:10.697197   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:43:10.697205   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.697216   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.697225   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.700810   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.896840   33104 request.go:632] Waited for 195.326418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.896917   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.896923   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.896933   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.896941   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.900668   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.901151   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.901172   33104 pod_ready.go:82] duration metric: took 355.430016ms for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.901182   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.096351   33104 request.go:632] Waited for 195.090932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:43:11.096408   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:43:11.096414   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.096422   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.096425   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.099605   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:11.296522   33104 request.go:632] Waited for 196.379972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:11.296583   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:11.296588   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.296595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.296599   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.299521   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:11.299966   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:11.299983   33104 pod_ready.go:82] duration metric: took 398.795354ms for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.299992   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.496407   33104 request.go:632] Waited for 196.359677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:43:11.496465   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:43:11.496470   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.496478   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.496483   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.503613   33104 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0927 17:43:11.696825   33104 request.go:632] Waited for 192.418859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:11.696922   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:11.696934   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.696944   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.696952   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.700522   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:11.701092   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:11.701110   33104 pod_ready.go:82] duration metric: took 401.113109ms for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.701119   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.896057   33104 request.go:632] Waited for 194.879526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:43:11.896120   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:43:11.896126   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.896132   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.896136   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.899805   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.096909   33104 request.go:632] Waited for 196.394213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.096966   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.096971   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.096978   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.096983   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.100042   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.100632   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.100653   33104 pod_ready.go:82] duration metric: took 399.528293ms for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.100663   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.296780   33104 request.go:632] Waited for 196.049394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:43:12.296852   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:43:12.296857   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.296864   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.296868   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.300216   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.497120   33104 request.go:632] Waited for 195.887177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.497190   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.497198   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.497208   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.497214   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.500765   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.501287   33104 pod_ready.go:93] pod "kube-proxy-kxwmh" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.501308   33104 pod_ready.go:82] duration metric: took 400.639485ms for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.501318   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.696369   33104 request.go:632] Waited for 194.968904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:43:12.696426   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:43:12.696431   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.696440   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.696444   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.699706   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.896719   33104 request.go:632] Waited for 196.366182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:12.896803   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:12.896809   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.896816   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.896823   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.900077   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.900632   33104 pod_ready.go:93] pod "kube-proxy-p76v9" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.900654   33104 pod_ready.go:82] duration metric: took 399.328849ms for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.900664   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.096686   33104 request.go:632] Waited for 195.950266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:43:13.096742   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:43:13.096747   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.096754   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.096758   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.099788   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.296662   33104 request.go:632] Waited for 196.364642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:13.296715   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:13.296720   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.296727   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.296730   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.299832   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.300287   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:13.300305   33104 pod_ready.go:82] duration metric: took 399.635674ms for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.300314   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.496503   33104 request.go:632] Waited for 196.090954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:43:13.496579   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:43:13.496587   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.496595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.496602   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.500814   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:13.697121   33104 request.go:632] Waited for 195.399465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:13.697197   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:13.697205   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.697216   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.697223   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.700589   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.701018   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:13.701040   33104 pod_ready.go:82] duration metric: took 400.71901ms for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.701054   33104 pod_ready.go:39] duration metric: took 3.200329427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
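Note (editorial, for illustration only): the pod_ready.go lines above repeatedly GET each control-plane pod and its node until the pod reports the "Ready" condition. A minimal client-go sketch of that check is below; it is not minikube's code, it assumes a reachable kubeconfig at the default path, and the pod name is taken from the log purely as an example.

	// podready_sketch.go — check one pod's Ready condition, as pod_ready.go does above.
	package main
	
	import (
		"context"
		"fmt"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Fetch the pod and inspect its Ready condition.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-748477-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
			}
		}
	}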
	I0927 17:43:13.701073   33104 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:43:13.701127   33104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:43:13.716701   33104 api_server.go:72] duration metric: took 21.080586953s to wait for apiserver process to appear ...
	I0927 17:43:13.716724   33104 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:43:13.716745   33104 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0927 17:43:13.721063   33104 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0927 17:43:13.721136   33104 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0927 17:43:13.721142   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.721150   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.721159   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.722231   33104 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 17:43:13.722325   33104 api_server.go:141] control plane version: v1.31.1
	I0927 17:43:13.722340   33104 api_server.go:131] duration metric: took 5.610564ms to wait for apiserver health ...
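Note (editorial, for illustration only): the health probe above is simply GET /healthz expecting the literal body "ok", followed by GET /version. The rough sketch below skips TLS verification and sends no credentials for brevity; minikube authenticates with its client certificates, and on clusters that reject anonymous requests this probe would return 401/403 instead.

	// healthz_sketch.go — probe the apiserver /healthz endpoint.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)
	
	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Illustration only: do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.39.217:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}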
	I0927 17:43:13.722347   33104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:43:13.896697   33104 request.go:632] Waited for 174.282639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:13.896775   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:13.896782   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.896793   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.896800   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.901747   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:13.907225   33104 system_pods.go:59] 17 kube-system pods found
	I0927 17:43:13.907254   33104 system_pods.go:61] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:43:13.907259   33104 system_pods.go:61] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:43:13.907264   33104 system_pods.go:61] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:43:13.907268   33104 system_pods.go:61] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:43:13.907271   33104 system_pods.go:61] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:43:13.907274   33104 system_pods.go:61] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:43:13.907278   33104 system_pods.go:61] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:43:13.907282   33104 system_pods.go:61] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:43:13.907285   33104 system_pods.go:61] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:43:13.907288   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:43:13.907293   33104 system_pods.go:61] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:43:13.907296   33104 system_pods.go:61] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:43:13.907302   33104 system_pods.go:61] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:43:13.907305   33104 system_pods.go:61] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:43:13.907308   33104 system_pods.go:61] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:43:13.907311   33104 system_pods.go:61] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:43:13.907314   33104 system_pods.go:61] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:43:13.907321   33104 system_pods.go:74] duration metric: took 184.96747ms to wait for pod list to return data ...
	I0927 17:43:13.907331   33104 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:43:14.096832   33104 request.go:632] Waited for 189.427057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:43:14.096891   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:43:14.096897   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.096905   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.096909   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.100749   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:14.101009   33104 default_sa.go:45] found service account: "default"
	I0927 17:43:14.101029   33104 default_sa.go:55] duration metric: took 193.692837ms for default service account to be created ...
	I0927 17:43:14.101037   33104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:43:14.296482   33104 request.go:632] Waited for 195.378336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:14.296581   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:14.296592   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.296603   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.296611   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.300663   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:14.305343   33104 system_pods.go:86] 17 kube-system pods found
	I0927 17:43:14.305387   33104 system_pods.go:89] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:43:14.305393   33104 system_pods.go:89] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:43:14.305397   33104 system_pods.go:89] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:43:14.305401   33104 system_pods.go:89] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:43:14.305405   33104 system_pods.go:89] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:43:14.305410   33104 system_pods.go:89] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:43:14.305415   33104 system_pods.go:89] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:43:14.305419   33104 system_pods.go:89] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:43:14.305423   33104 system_pods.go:89] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:43:14.305427   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:43:14.305435   33104 system_pods.go:89] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:43:14.305438   33104 system_pods.go:89] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:43:14.305442   33104 system_pods.go:89] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:43:14.305446   33104 system_pods.go:89] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:43:14.305450   33104 system_pods.go:89] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:43:14.305454   33104 system_pods.go:89] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:43:14.305457   33104 system_pods.go:89] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:43:14.305464   33104 system_pods.go:126] duration metric: took 204.421896ms to wait for k8s-apps to be running ...
	I0927 17:43:14.305470   33104 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:43:14.305515   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:43:14.319602   33104 system_svc.go:56] duration metric: took 14.121235ms WaitForService to wait for kubelet
	I0927 17:43:14.319638   33104 kubeadm.go:582] duration metric: took 21.683524227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:43:14.319663   33104 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:43:14.497069   33104 request.go:632] Waited for 177.328804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0927 17:43:14.497147   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0927 17:43:14.497154   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.497163   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.497168   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.500866   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:14.501573   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:43:14.501596   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:43:14.501610   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:43:14.501614   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:43:14.501620   33104 node_conditions.go:105] duration metric: took 181.9516ms to run NodePressure ...
	I0927 17:43:14.501634   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:43:14.501664   33104 start.go:255] writing updated cluster config ...
	I0927 17:43:14.503659   33104 out.go:201] 
	I0927 17:43:14.505222   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:14.505350   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:14.506867   33104 out.go:177] * Starting "ha-748477-m03" control-plane node in "ha-748477" cluster
	I0927 17:43:14.508071   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:43:14.508097   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:43:14.508199   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:43:14.508212   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:43:14.508319   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:14.508514   33104 start.go:360] acquireMachinesLock for ha-748477-m03: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:43:14.508582   33104 start.go:364] duration metric: took 33.744µs to acquireMachinesLock for "ha-748477-m03"
	I0927 17:43:14.508607   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:43:14.508723   33104 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0927 17:43:14.510363   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:43:14.510454   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:14.510494   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:14.525333   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0927 17:43:14.525777   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:14.526245   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:14.526298   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:14.526634   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:14.526863   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:14.527027   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:14.527179   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:43:14.527207   33104 client.go:168] LocalClient.Create starting
	I0927 17:43:14.527244   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:43:14.527283   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:43:14.527300   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:43:14.527373   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:43:14.527399   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:43:14.527413   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:43:14.527437   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:43:14.527447   33104 main.go:141] libmachine: (ha-748477-m03) Calling .PreCreateCheck
	I0927 17:43:14.527643   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:14.528097   33104 main.go:141] libmachine: Creating machine...
	I0927 17:43:14.528113   33104 main.go:141] libmachine: (ha-748477-m03) Calling .Create
	I0927 17:43:14.528262   33104 main.go:141] libmachine: (ha-748477-m03) Creating KVM machine...
	I0927 17:43:14.529473   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found existing default KVM network
	I0927 17:43:14.529581   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found existing private KVM network mk-ha-748477
	I0927 17:43:14.529722   33104 main.go:141] libmachine: (ha-748477-m03) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 ...
	I0927 17:43:14.529748   33104 main.go:141] libmachine: (ha-748477-m03) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:43:14.529795   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.529703   33861 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:43:14.529867   33104 main.go:141] libmachine: (ha-748477-m03) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:43:14.759285   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.759157   33861 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa...
	I0927 17:43:14.801359   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.801230   33861 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/ha-748477-m03.rawdisk...
	I0927 17:43:14.801398   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Writing magic tar header
	I0927 17:43:14.801441   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Writing SSH key tar header
	I0927 17:43:14.801464   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.801363   33861 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 ...
	I0927 17:43:14.801486   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03
	I0927 17:43:14.801542   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 (perms=drwx------)
	I0927 17:43:14.801588   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:43:14.801602   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:43:14.801611   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:43:14.801620   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:43:14.801631   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:43:14.801640   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:43:14.801647   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:43:14.801654   33104 main.go:141] libmachine: (ha-748477-m03) Creating domain...
	I0927 17:43:14.801662   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:43:14.801670   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:43:14.801678   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:43:14.801683   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home
	I0927 17:43:14.801690   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Skipping /home - not owner
	I0927 17:43:14.802911   33104 main.go:141] libmachine: (ha-748477-m03) define libvirt domain using xml: 
	I0927 17:43:14.802928   33104 main.go:141] libmachine: (ha-748477-m03) <domain type='kvm'>
	I0927 17:43:14.802938   33104 main.go:141] libmachine: (ha-748477-m03)   <name>ha-748477-m03</name>
	I0927 17:43:14.802946   33104 main.go:141] libmachine: (ha-748477-m03)   <memory unit='MiB'>2200</memory>
	I0927 17:43:14.802953   33104 main.go:141] libmachine: (ha-748477-m03)   <vcpu>2</vcpu>
	I0927 17:43:14.802962   33104 main.go:141] libmachine: (ha-748477-m03)   <features>
	I0927 17:43:14.802968   33104 main.go:141] libmachine: (ha-748477-m03)     <acpi/>
	I0927 17:43:14.802975   33104 main.go:141] libmachine: (ha-748477-m03)     <apic/>
	I0927 17:43:14.802985   33104 main.go:141] libmachine: (ha-748477-m03)     <pae/>
	I0927 17:43:14.802993   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803022   33104 main.go:141] libmachine: (ha-748477-m03)   </features>
	I0927 17:43:14.803039   33104 main.go:141] libmachine: (ha-748477-m03)   <cpu mode='host-passthrough'>
	I0927 17:43:14.803047   33104 main.go:141] libmachine: (ha-748477-m03)   
	I0927 17:43:14.803056   33104 main.go:141] libmachine: (ha-748477-m03)   </cpu>
	I0927 17:43:14.803062   33104 main.go:141] libmachine: (ha-748477-m03)   <os>
	I0927 17:43:14.803067   33104 main.go:141] libmachine: (ha-748477-m03)     <type>hvm</type>
	I0927 17:43:14.803073   33104 main.go:141] libmachine: (ha-748477-m03)     <boot dev='cdrom'/>
	I0927 17:43:14.803077   33104 main.go:141] libmachine: (ha-748477-m03)     <boot dev='hd'/>
	I0927 17:43:14.803084   33104 main.go:141] libmachine: (ha-748477-m03)     <bootmenu enable='no'/>
	I0927 17:43:14.803090   33104 main.go:141] libmachine: (ha-748477-m03)   </os>
	I0927 17:43:14.803095   33104 main.go:141] libmachine: (ha-748477-m03)   <devices>
	I0927 17:43:14.803102   33104 main.go:141] libmachine: (ha-748477-m03)     <disk type='file' device='cdrom'>
	I0927 17:43:14.803110   33104 main.go:141] libmachine: (ha-748477-m03)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/boot2docker.iso'/>
	I0927 17:43:14.803116   33104 main.go:141] libmachine: (ha-748477-m03)       <target dev='hdc' bus='scsi'/>
	I0927 17:43:14.803122   33104 main.go:141] libmachine: (ha-748477-m03)       <readonly/>
	I0927 17:43:14.803131   33104 main.go:141] libmachine: (ha-748477-m03)     </disk>
	I0927 17:43:14.803140   33104 main.go:141] libmachine: (ha-748477-m03)     <disk type='file' device='disk'>
	I0927 17:43:14.803152   33104 main.go:141] libmachine: (ha-748477-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:43:14.803173   33104 main.go:141] libmachine: (ha-748477-m03)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/ha-748477-m03.rawdisk'/>
	I0927 17:43:14.803187   33104 main.go:141] libmachine: (ha-748477-m03)       <target dev='hda' bus='virtio'/>
	I0927 17:43:14.803204   33104 main.go:141] libmachine: (ha-748477-m03)     </disk>
	I0927 17:43:14.803214   33104 main.go:141] libmachine: (ha-748477-m03)     <interface type='network'>
	I0927 17:43:14.803232   33104 main.go:141] libmachine: (ha-748477-m03)       <source network='mk-ha-748477'/>
	I0927 17:43:14.803250   33104 main.go:141] libmachine: (ha-748477-m03)       <model type='virtio'/>
	I0927 17:43:14.803301   33104 main.go:141] libmachine: (ha-748477-m03)     </interface>
	I0927 17:43:14.803324   33104 main.go:141] libmachine: (ha-748477-m03)     <interface type='network'>
	I0927 17:43:14.803338   33104 main.go:141] libmachine: (ha-748477-m03)       <source network='default'/>
	I0927 17:43:14.803347   33104 main.go:141] libmachine: (ha-748477-m03)       <model type='virtio'/>
	I0927 17:43:14.803356   33104 main.go:141] libmachine: (ha-748477-m03)     </interface>
	I0927 17:43:14.803366   33104 main.go:141] libmachine: (ha-748477-m03)     <serial type='pty'>
	I0927 17:43:14.803374   33104 main.go:141] libmachine: (ha-748477-m03)       <target port='0'/>
	I0927 17:43:14.803386   33104 main.go:141] libmachine: (ha-748477-m03)     </serial>
	I0927 17:43:14.803397   33104 main.go:141] libmachine: (ha-748477-m03)     <console type='pty'>
	I0927 17:43:14.803409   33104 main.go:141] libmachine: (ha-748477-m03)       <target type='serial' port='0'/>
	I0927 17:43:14.803420   33104 main.go:141] libmachine: (ha-748477-m03)     </console>
	I0927 17:43:14.803429   33104 main.go:141] libmachine: (ha-748477-m03)     <rng model='virtio'>
	I0927 17:43:14.803439   33104 main.go:141] libmachine: (ha-748477-m03)       <backend model='random'>/dev/random</backend>
	I0927 17:43:14.803448   33104 main.go:141] libmachine: (ha-748477-m03)     </rng>
	I0927 17:43:14.803456   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803464   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803470   33104 main.go:141] libmachine: (ha-748477-m03)   </devices>
	I0927 17:43:14.803478   33104 main.go:141] libmachine: (ha-748477-m03) </domain>
	I0927 17:43:14.803488   33104 main.go:141] libmachine: (ha-748477-m03) 
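Note (editorial, for illustration only): the XML above is a complete libvirt domain description for the new node. The sketch below shows one way to register and boot such a domain by shelling out to virsh; it is not how the kvm2 driver does it (the driver talks to libvirt directly), and the input file name is hypothetical.

	// definedomain_sketch.go — define and start a libvirt domain from an XML file.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		xml, err := os.ReadFile("ha-748477-m03.xml") // the <domain> XML shown above
		if err != nil {
			panic(err)
		}
		tmp, err := os.CreateTemp("", "domain-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(tmp.Name())
		if _, err := tmp.Write(xml); err != nil {
			panic(err)
		}
		tmp.Close()
	
		// "virsh define" registers the domain with libvirt; "virsh start" boots it.
		for _, args := range [][]string{
			{"--connect", "qemu:///system", "define", tmp.Name()},
			{"--connect", "qemu:///system", "start", "ha-748477-m03"},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			fmt.Printf("virsh %v: %s\n", args, out)
			if err != nil {
				panic(err)
			}
		}
	}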
	I0927 17:43:14.809886   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:46:4f:8f in network default
	I0927 17:43:14.810424   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring networks are active...
	I0927 17:43:14.810447   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:14.811161   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring network default is active
	I0927 17:43:14.811552   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring network mk-ha-748477 is active
	I0927 17:43:14.811864   33104 main.go:141] libmachine: (ha-748477-m03) Getting domain xml...
	I0927 17:43:14.812640   33104 main.go:141] libmachine: (ha-748477-m03) Creating domain...
	I0927 17:43:16.061728   33104 main.go:141] libmachine: (ha-748477-m03) Waiting to get IP...
	I0927 17:43:16.062561   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.063038   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.063058   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.062985   33861 retry.go:31] will retry after 274.225477ms: waiting for machine to come up
	I0927 17:43:16.338624   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.339183   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.339208   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.339134   33861 retry.go:31] will retry after 249.930567ms: waiting for machine to come up
	I0927 17:43:16.590699   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.591137   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.591158   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.591098   33861 retry.go:31] will retry after 427.975523ms: waiting for machine to come up
	I0927 17:43:17.021029   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.021704   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.021792   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.021629   33861 retry.go:31] will retry after 377.570175ms: waiting for machine to come up
	I0927 17:43:17.401315   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.401764   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.401789   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.401730   33861 retry.go:31] will retry after 480.401499ms: waiting for machine to come up
	I0927 17:43:17.883333   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.883876   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.883904   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.883818   33861 retry.go:31] will retry after 806.335644ms: waiting for machine to come up
	I0927 17:43:18.691641   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:18.692132   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:18.692163   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:18.692063   33861 retry.go:31] will retry after 996.155949ms: waiting for machine to come up
	I0927 17:43:19.690169   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:19.690576   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:19.690600   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:19.690536   33861 retry.go:31] will retry after 1.280499747s: waiting for machine to come up
	I0927 17:43:20.972507   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:20.972924   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:20.972949   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:20.972873   33861 retry.go:31] will retry after 1.740341439s: waiting for machine to come up
	I0927 17:43:22.715948   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:22.716453   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:22.716480   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:22.716399   33861 retry.go:31] will retry after 2.220570146s: waiting for machine to come up
	I0927 17:43:24.939094   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:24.939777   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:24.939807   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:24.939729   33861 retry.go:31] will retry after 1.898000228s: waiting for machine to come up
	I0927 17:43:26.839799   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:26.840424   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:26.840450   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:26.840370   33861 retry.go:31] will retry after 3.204742412s: waiting for machine to come up
	I0927 17:43:30.046789   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:30.047236   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:30.047261   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:30.047187   33861 retry.go:31] will retry after 3.849840599s: waiting for machine to come up
	I0927 17:43:33.899866   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:33.900417   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:33.900443   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:33.900384   33861 retry.go:31] will retry after 4.029402489s: waiting for machine to come up
	I0927 17:43:37.931866   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:37.932267   33104 main.go:141] libmachine: (ha-748477-m03) Found IP for machine: 192.168.39.225
	I0927 17:43:37.932289   33104 main.go:141] libmachine: (ha-748477-m03) Reserving static IP address...
	I0927 17:43:37.932301   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has current primary IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:37.932706   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find host DHCP lease matching {name: "ha-748477-m03", mac: "52:54:00:bf:59:33", ip: "192.168.39.225"} in network mk-ha-748477
	I0927 17:43:38.014671   33104 main.go:141] libmachine: (ha-748477-m03) Reserved static IP address: 192.168.39.225
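Note (editorial, for illustration only): the "will retry after ..." lines above are a jittered backoff loop that polls libvirt for the guest's DHCP lease until an IP appears. A generic sketch of that pattern follows; lookupIP is a hypothetical stand-in, and the delays and deadline are illustrative, not the driver's actual constants.

	// waitforip_sketch.go — poll for a VM's IP with capped, jittered backoff.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP is hypothetical: return the VM's IP, or an error if there is no lease yet.
	func lookupIP(name string) (string, error) {
		return "", errors.New("no DHCP lease yet")
	}
	
	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP("ha-748477-m03"); err == nil {
				fmt.Println("got IP:", ip)
				return
			}
			// Jittered, roughly exponential backoff, capped at a few seconds.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		fmt.Println("timed out waiting for an IP")
	}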
	I0927 17:43:38.014703   33104 main.go:141] libmachine: (ha-748477-m03) Waiting for SSH to be available...
	I0927 17:43:38.014712   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Getting to WaitForSSH function...
	I0927 17:43:38.017503   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.018016   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.018038   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.018293   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using SSH client type: external
	I0927 17:43:38.018324   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa (-rw-------)
	I0927 17:43:38.018358   33104 main.go:141] libmachine: (ha-748477-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:43:38.018375   33104 main.go:141] libmachine: (ha-748477-m03) DBG | About to run SSH command:
	I0927 17:43:38.018391   33104 main.go:141] libmachine: (ha-748477-m03) DBG | exit 0
	I0927 17:43:38.146846   33104 main.go:141] libmachine: (ha-748477-m03) DBG | SSH cmd err, output: <nil>: 
	I0927 17:43:38.147182   33104 main.go:141] libmachine: (ha-748477-m03) KVM machine creation complete!
	I0927 17:43:38.147465   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:38.148028   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:38.148248   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:38.148515   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:43:38.148529   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetState
	I0927 17:43:38.150026   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:43:38.150038   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:43:38.150043   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:43:38.150053   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.152279   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.152703   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.152731   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.152930   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.153090   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.153241   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.153385   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.153555   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.153754   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.153768   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:43:38.265876   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
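Note (editorial, for illustration only): the WaitForSSH step above simply runs "exit 0" over SSH with the machine's generated key until it succeeds. A minimal equivalent using golang.org/x/crypto/ssh is sketched below; host key checking is skipped only to keep the example short, and the key path and address are taken from the log as examples.

	// sshcheck_sketch.go — prove the guest's sshd is up by running "exit 0".
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
		}
		client, err := ssh.Dial("tcp", "192.168.39.225:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		// A successful "exit 0" means sshd accepted the key and ran a command.
		if err := session.Run("exit 0"); err != nil {
			panic(err)
		}
		fmt.Println("SSH is available")
	}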
	I0927 17:43:38.265897   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:43:38.265904   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.268621   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.269047   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.269076   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.269294   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.269526   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.269745   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.269874   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.270033   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.270230   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.270243   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:43:38.383161   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:43:38.383229   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:43:38.383244   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:43:38.383259   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.383511   33104 buildroot.go:166] provisioning hostname "ha-748477-m03"
	I0927 17:43:38.383534   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.383702   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.386560   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.386936   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.386960   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.387130   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.387316   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.387515   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.387694   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.387876   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.388053   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.388066   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477-m03 && echo "ha-748477-m03" | sudo tee /etc/hostname
	I0927 17:43:38.517221   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477-m03
	
	I0927 17:43:38.517257   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.520130   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.520637   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.520668   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.520845   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.521018   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.521146   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.521319   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.521531   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.521692   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.521708   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:43:38.647377   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:43:38.647402   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:43:38.647415   33104 buildroot.go:174] setting up certificates
	I0927 17:43:38.647425   33104 provision.go:84] configureAuth start
	I0927 17:43:38.647433   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.647695   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:38.650891   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.651352   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.651376   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.651507   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.653842   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.654158   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.654175   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.654290   33104 provision.go:143] copyHostCerts
	I0927 17:43:38.654319   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:43:38.654364   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:43:38.654376   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:43:38.654459   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:43:38.654546   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:43:38.654572   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:43:38.654581   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:43:38.654616   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:43:38.654702   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:43:38.654726   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:43:38.654735   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:43:38.654768   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:43:38.654847   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477-m03 san=[127.0.0.1 192.168.39.225 ha-748477-m03 localhost minikube]
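The server certificate generated above is signed by the machine CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.225, ha-748477-m03, localhost, minikube). A quick way to confirm which names actually ended up in such a certificate, as a sketch using the path from this run:

    # Inspect the SANs baked into the generated server certificate
    # (path taken from this log; adjust for a different MINIKUBE_HOME).
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'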
	I0927 17:43:38.750947   33104 provision.go:177] copyRemoteCerts
	I0927 17:43:38.751001   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:43:38.751023   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.753961   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.754344   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.754372   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.754619   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.754798   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.754987   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.755087   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:38.840538   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:43:38.840622   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:43:38.865467   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:43:38.865545   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:43:38.889287   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:43:38.889354   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 17:43:38.913853   33104 provision.go:87] duration metric: took 266.415768ms to configureAuth
	I0927 17:43:38.913886   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:43:38.914119   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:38.914188   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.916953   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.917343   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.917389   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.917634   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.917835   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.918007   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.918197   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.918414   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.918567   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.918582   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:43:39.149801   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:43:39.149830   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:43:39.149841   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetURL
	I0927 17:43:39.151338   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using libvirt version 6000000
	I0927 17:43:39.154047   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.154538   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.154584   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.154757   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:43:39.154780   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:43:39.154790   33104 client.go:171] duration metric: took 24.627572253s to LocalClient.Create
	I0927 17:43:39.154853   33104 start.go:167] duration metric: took 24.627635604s to libmachine.API.Create "ha-748477"
	I0927 17:43:39.154866   33104 start.go:293] postStartSetup for "ha-748477-m03" (driver="kvm2")
	I0927 17:43:39.154874   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:43:39.154890   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.155121   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:43:39.155148   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.157417   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.157783   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.157810   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.157968   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.158151   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.158328   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.158514   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.245650   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:43:39.250017   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:43:39.250039   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:43:39.250125   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:43:39.250232   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:43:39.250246   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:43:39.250349   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:43:39.261588   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:43:39.287333   33104 start.go:296] duration metric: took 132.452339ms for postStartSetup
	I0927 17:43:39.287401   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:39.288010   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:39.291082   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.291501   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.291531   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.291849   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:39.292090   33104 start.go:128] duration metric: took 24.783356022s to createHost
	I0927 17:43:39.292116   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.294390   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.294793   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.294820   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.294965   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.295132   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.295273   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.295377   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.295501   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:39.295656   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:39.295666   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:43:39.411619   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727459019.389020724
	
	I0927 17:43:39.411648   33104 fix.go:216] guest clock: 1727459019.389020724
	I0927 17:43:39.411657   33104 fix.go:229] Guest: 2024-09-27 17:43:39.389020724 +0000 UTC Remote: 2024-09-27 17:43:39.292103608 +0000 UTC m=+148.045500714 (delta=96.917116ms)
	I0927 17:43:39.411678   33104 fix.go:200] guest clock delta is within tolerance: 96.917116ms
	I0927 17:43:39.411685   33104 start.go:83] releasing machines lock for "ha-748477-m03", held for 24.903091459s
	I0927 17:43:39.411706   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.411995   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:39.415530   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.415971   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.416001   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.418411   33104 out.go:177] * Found network options:
	I0927 17:43:39.419695   33104 out.go:177]   - NO_PROXY=192.168.39.217,192.168.39.58
	W0927 17:43:39.421098   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 17:43:39.421127   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:43:39.421146   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.421784   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.421985   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.422065   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:43:39.422102   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	W0927 17:43:39.422186   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 17:43:39.422213   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:43:39.422273   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:43:39.422290   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.425046   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425070   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425405   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.425433   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425459   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.425473   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425650   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.425656   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.425841   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.425845   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.425989   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.426058   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.426122   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.426163   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.669795   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:43:39.677634   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:43:39.677716   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:43:39.695349   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:43:39.695382   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:43:39.695446   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:43:39.715092   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:43:39.728101   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:43:39.728166   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:43:39.743124   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:43:39.759724   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:43:39.876420   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:43:40.024261   33104 docker.go:233] disabling docker service ...
	I0927 17:43:40.024330   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:43:40.038245   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:43:40.051565   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:43:40.182718   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:43:40.288143   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:43:40.301741   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:43:40.319929   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:43:40.319996   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.330123   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:43:40.330196   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.340177   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.350053   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.359649   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:43:40.370207   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.380395   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.396915   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
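The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged binds below port 1024. A quick post-edit check, as a sketch against the same file:

    # Confirm the drop-in now carries the values the commands above wrote.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # After the restart a few lines further down, the runtime should report CRI-O 1.29.x.
    sudo crictl version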
	I0927 17:43:40.407460   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:43:40.418005   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:43:40.418063   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:43:40.432276   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
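The sysctl probe above fails only because br_netfilter is not loaded yet, so the log falls back to modprobe and then enables IPv4 forwarding directly. The minikube guest applies these settings at runtime; on a general host you would typically also persist them, for example (an assumption, not something this run does):

    # Persist bridge netfilter and IPv4 forwarding across reboots.
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    cat <<'EOF' | sudo tee /etc/sysctl.d/99-kubernetes.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo sysctl --system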
	I0927 17:43:40.441789   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:43:40.568411   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:43:40.662140   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:43:40.662238   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:43:40.666515   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:43:40.666579   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:43:40.670183   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:43:40.717483   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:43:40.717566   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:43:40.748394   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:43:40.780693   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:43:40.782171   33104 out.go:177]   - env NO_PROXY=192.168.39.217
	I0927 17:43:40.783616   33104 out.go:177]   - env NO_PROXY=192.168.39.217,192.168.39.58
	I0927 17:43:40.784733   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:40.787731   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:40.788217   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:40.788253   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:40.788539   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:43:40.792731   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:43:40.806447   33104 mustload.go:65] Loading cluster: ha-748477
	I0927 17:43:40.806781   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:40.807166   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:40.807212   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:40.822513   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0927 17:43:40.823010   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:40.823465   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:40.823485   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:40.823815   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:40.824022   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:43:40.825639   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:43:40.826053   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:40.826124   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:40.841477   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I0927 17:43:40.841930   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:40.842426   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:40.842447   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:40.842805   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:40.843010   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:43:40.843186   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.225
	I0927 17:43:40.843200   33104 certs.go:194] generating shared ca certs ...
	I0927 17:43:40.843218   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:40.843371   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:43:40.843411   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:43:40.843417   33104 certs.go:256] generating profile certs ...
	I0927 17:43:40.843480   33104 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:43:40.843503   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9
	I0927 17:43:40.843516   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.225 192.168.39.254]
	I0927 17:43:41.042816   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 ...
	I0927 17:43:41.042845   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9: {Name:mkb90c985fb1d25421e8db77e70e31dc9e70f7a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:41.043004   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9 ...
	I0927 17:43:41.043015   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9: {Name:mk8a7a00dfda8086d770b62e0a97735d5734e23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:41.043080   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:43:41.043215   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:43:41.043337   33104 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:43:41.043351   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:43:41.043364   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:43:41.043379   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:43:41.043391   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:43:41.043404   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:43:41.043417   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:43:41.043428   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:43:41.066805   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:43:41.066895   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:43:41.066928   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:43:41.066939   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:43:41.066959   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:43:41.066982   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:43:41.067004   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:43:41.067043   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:43:41.067080   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.067101   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.067118   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.067151   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:43:41.070167   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:41.070759   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:43:41.070790   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:41.071003   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:43:41.071223   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:43:41.071385   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:43:41.071558   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:43:41.147059   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 17:43:41.152408   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 17:43:41.164540   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 17:43:41.168851   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 17:43:41.179537   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 17:43:41.183316   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 17:43:41.193077   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 17:43:41.197075   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0927 17:43:41.207804   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 17:43:41.211696   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 17:43:41.221742   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 17:43:41.225610   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 17:43:41.235977   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:43:41.260849   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:43:41.285062   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:43:41.309713   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:43:41.332498   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0927 17:43:41.356394   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 17:43:41.380266   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:43:41.404334   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:43:41.432122   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:43:41.455867   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:43:41.479143   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:43:41.501633   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 17:43:41.518790   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 17:43:41.534928   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 17:43:41.551854   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0927 17:43:41.568140   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 17:43:41.584545   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 17:43:41.600656   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 17:43:41.616675   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:43:41.622211   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:43:41.632889   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.637255   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.637327   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.642842   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:43:41.653070   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:43:41.663785   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.668204   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.668272   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.673573   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:43:41.686375   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:43:41.697269   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.702234   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.702308   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.707933   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:43:41.719033   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:43:41.723054   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:43:41.723112   33104 kubeadm.go:934] updating node {m03 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0927 17:43:41.723208   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
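The kubelet drop-in shown above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; the flags worth noticing are --hostname-override=ha-748477-m03 and --node-ip=192.168.39.225. To see what systemd actually merged and which flags the running kubelet picked up, a sketch using standard tooling:

    # Show the kubelet unit together with every drop-in systemd merged.
    systemctl cat kubelet
    # Confirm the node-ip / hostname-override flags on the running process.
    ps -o args= -C kubelet | tr ' ' '\n' | grep -E 'node-ip|hostname-override'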
	I0927 17:43:41.723244   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:43:41.723291   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:43:41.741075   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:43:41.741157   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
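The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp below), so each control-plane node runs kube-vip as a static pod and competes for the Lease named plndr-cp-lock to hold the VIP 192.168.39.254:8443. Once the cluster is up, leadership and the VIP can be checked with something like the following sketch (names taken from the config above; /version is normally readable without credentials):

    # Which control-plane node currently holds the VIP lease?
    kubectl -n kube-system get lease plndr-cp-lock \
      -o jsonpath='{.spec.holderIdentity}{"\n"}'
    # Is the VIP answering on the API server port?
    curl -k https://192.168.39.254:8443/version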
	I0927 17:43:41.741232   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:43:41.751232   33104 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 17:43:41.751324   33104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 17:43:41.760899   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 17:43:41.760908   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 17:43:41.760931   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:43:41.760912   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 17:43:41.760955   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:43:41.760999   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:43:41.761007   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:43:41.761019   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:43:41.775995   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:43:41.776050   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 17:43:41.776070   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 17:43:41.776102   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:43:41.776118   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 17:43:41.776149   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 17:43:41.807089   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 17:43:41.807127   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 17:43:42.630057   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 17:43:42.639770   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 17:43:42.656295   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:43:42.672793   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:43:42.690976   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:43:42.694501   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:43:42.706939   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:43:42.822795   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:43:42.839249   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:43:42.839706   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:42.839761   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:42.856985   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0927 17:43:42.857497   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:42.858071   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:42.858097   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:42.858483   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:42.858728   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:43:42.858882   33104 start.go:317] joinCluster: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:43:42.858996   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 17:43:42.859017   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:43:42.862454   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:42.862936   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:43:42.862961   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:42.863106   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:43:42.863242   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:43:42.863373   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:43:42.863511   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:43:43.018533   33104 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:43:43.018576   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gg5wlb.ttkule5dhfsmakjt --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m03 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I0927 17:44:05.879368   33104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gg5wlb.ttkule5dhfsmakjt --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m03 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (22.860766617s)
	I0927 17:44:05.879405   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 17:44:06.450456   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477-m03 minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=false
	I0927 17:44:06.570812   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-748477-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 17:44:06.695756   33104 start.go:319] duration metric: took 23.836880106s to joinCluster
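At this point the third control-plane node has joined (the kubeadm join above took about 23s), been labeled, and had its control-plane NoSchedule taint removed. A quick way to confirm the new member from any working kubeconfig, as a sketch rather than part of the test run:

    # All three control-plane nodes should now be listed.
    kubectl get nodes -o wide
    # The labels the log just applied to the new node:
    kubectl get node ha-748477-m03 --show-labels | tr ',' '\n' | grep minikube.k8s.io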
	I0927 17:44:06.695831   33104 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:44:06.696168   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:44:06.698664   33104 out.go:177] * Verifying Kubernetes components...
	I0927 17:44:06.700038   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:44:06.966281   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:44:06.988180   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:44:06.988494   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 17:44:06.988564   33104 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0927 17:44:06.988753   33104 node_ready.go:35] waiting up to 6m0s for node "ha-748477-m03" to be "Ready" ...
	I0927 17:44:06.988830   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:06.988838   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:06.988846   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:06.988849   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:06.992308   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:07.488982   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:07.489008   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:07.489020   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:07.489027   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:07.492583   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:07.988968   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:07.988994   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:07.989004   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:07.989011   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:07.993492   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:08.489684   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:08.489716   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:08.489726   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:08.489733   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:08.492856   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:08.989902   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:08.989923   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:08.989931   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:08.989937   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:08.994357   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:08.995455   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:09.489815   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:09.489842   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:09.489854   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:09.489860   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:09.493739   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:09.989180   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:09.989203   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:09.989211   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:09.989215   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:09.993543   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:10.489209   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:10.489234   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:10.489246   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:10.489253   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:10.492922   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:10.989208   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:10.989240   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:10.989251   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:10.989256   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:10.992477   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:11.489265   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:11.489287   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:11.489296   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:11.489304   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:11.492474   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:11.492926   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:11.989355   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:11.989380   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:11.989390   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:11.989394   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:11.992835   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:12.489471   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:12.489492   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:12.489500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:12.489504   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:12.493061   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:12.989541   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:12.989567   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:12.989575   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:12.989579   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:12.992728   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:13.489760   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:13.489793   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:13.489806   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:13.489812   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:13.497872   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:44:13.498431   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:13.989853   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:13.989880   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:13.989891   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:13.989897   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:13.993174   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:14.489807   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:14.489829   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:14.489837   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:14.489841   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:14.492717   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:14.989051   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:14.989078   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:14.989086   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:14.989090   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:14.992500   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.489879   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:15.489902   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:15.489912   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:15.489917   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:15.493620   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.989863   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:15.989886   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:15.989894   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:15.989898   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:15.993642   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.994205   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:16.489216   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:16.489238   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:16.489246   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:16.489251   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:16.492886   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:16.989910   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:16.989931   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:16.989940   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:16.989945   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:16.993350   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:17.489239   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:17.489263   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:17.489272   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:17.489276   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:17.492577   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:17.989223   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:17.989270   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:17.989278   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:17.989284   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:17.992505   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:18.489403   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:18.489430   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:18.489443   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:18.489449   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:18.492511   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:18.493206   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:18.989479   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:18.989510   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:18.989519   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:18.989524   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:18.992918   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:19.489608   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:19.489633   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:19.489641   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:19.489646   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:19.493022   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:19.989818   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:19.989842   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:19.989850   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:19.989853   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:19.993975   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:20.489504   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:20.489533   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:20.489542   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:20.489546   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:20.492731   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:20.493288   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:20.988966   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:20.988991   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:20.989000   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:20.989003   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:20.992757   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:21.489625   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:21.489646   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:21.489657   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:21.489662   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:21.493197   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:21.988951   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:21.988974   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:21.988982   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:21.988986   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:21.992564   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:22.489223   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:22.489254   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:22.489262   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:22.489270   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:22.492275   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:22.989460   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:22.989483   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:22.989493   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:22.989502   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:22.992826   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:22.993315   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:23.489736   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:23.489756   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.489764   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.489768   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.495068   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:23.989320   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:23.989345   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.989356   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.989363   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.992950   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:23.993381   33104 node_ready.go:49] node "ha-748477-m03" has status "Ready":"True"
	I0927 17:44:23.993400   33104 node_ready.go:38] duration metric: took 17.004633158s for node "ha-748477-m03" to be "Ready" ...
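The node_ready wait above is a plain poll: GET /api/v1/nodes/ha-748477-m03 roughly twice a second until the node's Ready condition turns True, bounded by the 6m0s budget. A minimal client-go sketch of an equivalent poll, assuming the kubeconfig path from the log and the k8s.io/client-go and k8s.io/apimachinery modules; this is an illustration, not minikube's node_ready.go:

    // Minimal sketch of a node-Ready poll with client-go; paths and names taken from the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19712-11184/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms (the log polls about twice a second) for up to 6 minutes.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-748477-m03", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling on transient errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("node ready wait finished, err =", err)
    }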
	I0927 17:44:23.993411   33104 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:44:23.993477   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:23.993489   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.993500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.993509   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.999279   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:24.006063   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.006162   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n99lr
	I0927 17:44:24.006171   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.006185   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.006194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.009676   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.010413   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.010431   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.010440   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.010444   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.013067   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.013609   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.013634   33104 pod_ready.go:82] duration metric: took 7.540949ms for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.013648   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.013707   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qvp2z
	I0927 17:44:24.013715   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.013723   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.013734   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.016476   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.017040   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.017054   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.017061   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.017064   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.019465   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.020063   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.020102   33104 pod_ready.go:82] duration metric: took 6.431397ms for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.020111   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.020159   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477
	I0927 17:44:24.020167   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.020173   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.020177   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.022709   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.023386   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.023403   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.023413   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.023418   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.025863   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.026254   33104 pod_ready.go:93] pod "etcd-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.026275   33104 pod_ready.go:82] duration metric: took 6.154043ms for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.026285   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.026339   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m02
	I0927 17:44:24.026349   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.026358   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.026367   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.028864   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.029549   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:24.029570   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.029581   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.029587   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.032020   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.032371   33104 pod_ready.go:93] pod "etcd-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.032386   33104 pod_ready.go:82] duration metric: took 6.091988ms for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.032394   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.189823   33104 request.go:632] Waited for 157.37468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m03
	I0927 17:44:24.189892   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m03
	I0927 17:44:24.189897   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.189904   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.189908   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.193136   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.390201   33104 request.go:632] Waited for 196.372402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:24.390286   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:24.390297   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.390308   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.390313   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.393762   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.394363   33104 pod_ready.go:93] pod "etcd-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.394381   33104 pod_ready.go:82] duration metric: took 361.981746ms for pod "etcd-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
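The "Waited for ... due to client-side throttling" messages come from client-go's client-side rate limiter; the rest.Config dumped earlier shows QPS:0, Burst:0, which falls back to the library defaults (5 QPS, burst 10), so bursts of back-to-back pod and node GETs get delayed by a couple hundred milliseconds each. A sketch of raising those limits on a rest.Config before building the clientset (the values here are illustrative assumptions):

    // Sketch only: raising the client-side rate limits that produce the
    // "Waited for ... due to client-side throttling" messages in the log.
    package clientutil

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // allow more requests per second before throttling kicks in
    	cfg.Burst = 100 // and a larger burst, so consecutive GETs are not delayed
    	return kubernetes.NewForConfig(cfg)
    }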
	I0927 17:44:24.394396   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.589922   33104 request.go:632] Waited for 195.447053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:44:24.589977   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:44:24.589984   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.589994   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.590003   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.595149   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:24.790340   33104 request.go:632] Waited for 194.372172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.790393   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.790398   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.790405   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.790410   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.794157   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.794854   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.794872   33104 pod_ready.go:82] duration metric: took 400.469945ms for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.794884   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.990005   33104 request.go:632] Waited for 195.038611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:44:24.990097   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:44:24.990106   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.990114   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.990120   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.993651   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.189611   33104 request.go:632] Waited for 195.314442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:25.189675   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:25.189682   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.189692   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.189702   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.192900   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.193483   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.193499   33104 pod_ready.go:82] duration metric: took 398.608065ms for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.193510   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.389697   33104 request.go:632] Waited for 196.11571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m03
	I0927 17:44:25.389767   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m03
	I0927 17:44:25.389774   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.389785   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.389793   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.393037   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.590215   33104 request.go:632] Waited for 196.404084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:25.590294   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:25.590304   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.590312   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.590316   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.593767   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.594384   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.594405   33104 pod_ready.go:82] duration metric: took 400.885974ms for pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.594417   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.789682   33104 request.go:632] Waited for 195.173744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:44:25.789750   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:44:25.789763   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.789771   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.789780   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.793195   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.990184   33104 request.go:632] Waited for 196.372393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:25.990247   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:25.990253   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.990260   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.990263   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.993519   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.994033   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.994056   33104 pod_ready.go:82] duration metric: took 399.631199ms for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.994070   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.190045   33104 request.go:632] Waited for 195.907906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:44:26.190131   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:44:26.190138   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.190151   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.190160   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.193660   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.389361   33104 request.go:632] Waited for 195.017885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:26.389417   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:26.389421   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.389428   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.389431   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.392564   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.393105   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:26.393124   33104 pod_ready.go:82] duration metric: took 399.046825ms for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.393133   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.589483   33104 request.go:632] Waited for 196.270592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m03
	I0927 17:44:26.589536   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m03
	I0927 17:44:26.589540   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.589548   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.589552   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.592906   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.789895   33104 request.go:632] Waited for 196.382825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:26.789947   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:26.789952   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.789961   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.789964   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.793463   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.793873   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:26.793891   33104 pod_ready.go:82] duration metric: took 400.752393ms for pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.793901   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.989945   33104 request.go:632] Waited for 195.982437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:44:26.990000   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:44:26.990005   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.990031   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.990035   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.993238   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.190379   33104 request.go:632] Waited for 196.39365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:27.190481   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:27.190488   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.190500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.190506   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.194446   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.195047   33104 pod_ready.go:93] pod "kube-proxy-kxwmh" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.195067   33104 pod_ready.go:82] duration metric: took 401.160768ms for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.195076   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.390020   33104 request.go:632] Waited for 194.886629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:44:27.390100   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:44:27.390108   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.390118   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.390144   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.393971   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.590100   33104 request.go:632] Waited for 195.421674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:27.590160   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:27.590166   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.590174   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.590180   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.593717   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.594167   33104 pod_ready.go:93] pod "kube-proxy-p76v9" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.594184   33104 pod_ready.go:82] duration metric: took 399.103012ms for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.594193   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vwkqb" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.790210   33104 request.go:632] Waited for 195.943653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vwkqb
	I0927 17:44:27.790293   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vwkqb
	I0927 17:44:27.790300   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.790312   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.790320   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.793922   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.989848   33104 request.go:632] Waited for 194.791805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:27.989907   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:27.989914   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.989923   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.989939   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.993415   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.993925   33104 pod_ready.go:93] pod "kube-proxy-vwkqb" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.993944   33104 pod_ready.go:82] duration metric: took 399.743885ms for pod "kube-proxy-vwkqb" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.993955   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.190067   33104 request.go:632] Waited for 196.037102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:44:28.190120   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:44:28.190126   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.190133   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.190138   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.193549   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.389329   33104 request.go:632] Waited for 195.18973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:28.389427   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:28.389436   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.389447   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.389459   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.392869   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.393523   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:28.393543   33104 pod_ready.go:82] duration metric: took 399.580493ms for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.393553   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.589680   33104 request.go:632] Waited for 196.059502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:44:28.589758   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:44:28.589766   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.589798   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.589812   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.593515   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.789392   33104 request.go:632] Waited for 195.298123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:28.789503   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:28.789516   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.789528   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.789539   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.792681   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.793229   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:28.793254   33104 pod_ready.go:82] duration metric: took 399.693783ms for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.793277   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.990199   33104 request.go:632] Waited for 196.858043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m03
	I0927 17:44:28.990266   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m03
	I0927 17:44:28.990272   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.990278   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.990283   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.993839   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.189981   33104 request.go:632] Waited for 195.403888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:29.190077   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:29.190088   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.190096   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.190103   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.193637   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.194214   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:29.194235   33104 pod_ready.go:82] duration metric: took 400.951036ms for pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:29.194250   33104 pod_ready.go:39] duration metric: took 5.200829097s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
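Each pod_ready check above boils down to fetching the pod and looking for a Ready condition with status True, using the per-component label selectors listed in the log. A small client-go sketch of that check, offered as an illustration rather than minikube's pod_ready.go:

    // Sketch: per-pod readiness check and a helper that applies it to one of the
    // kube-system label selectors from the log (for example "k8s-app=kube-proxy").
    package podwait

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod has a Ready condition with status True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // AllSystemPodsReady lists kube-system pods matching the selector and checks each one.
    func AllSystemPodsReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for i := range pods.Items {
    		if !isPodReady(&pods.Items[i]) {
    			return false, nil
    		}
    	}
    	return true, nil
    }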
	I0927 17:44:29.194265   33104 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:44:29.194320   33104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:44:29.209103   33104 api_server.go:72] duration metric: took 22.513227302s to wait for apiserver process to appear ...
	I0927 17:44:29.209147   33104 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:44:29.209171   33104 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0927 17:44:29.213508   33104 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0927 17:44:29.213572   33104 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0927 17:44:29.213579   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.213589   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.213599   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.214754   33104 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 17:44:29.214825   33104 api_server.go:141] control plane version: v1.31.1
	I0927 17:44:29.214842   33104 api_server.go:131] duration metric: took 5.68685ms to wait for apiserver health ...
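The health check above talks to the apiserver directly: GET /healthz must return 200 with body "ok", then GET /version yields the control-plane version (v1.31.1 here). A standard-library sketch of the same two requests, reusing the client certificate, key, and CA paths shown in the rest.Config dump above (assuming those files are readable from wherever this runs):

    // Sketch: querying /healthz and /version with the client cert paths from the log.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	base := "https://192.168.39.217:8443"
    	profile := "/home/jenkins/minikube-integration/19712-11184/.minikube"

    	cert, err := tls.LoadX509KeyPair(profile+"/profiles/ha-748477/client.crt", profile+"/profiles/ha-748477/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile(profile + "/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get(base + path)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %s\n%s\n", path, resp.Status, body)
    	}
    }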
	I0927 17:44:29.214854   33104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:44:29.390318   33104 request.go:632] Waited for 175.371088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.390382   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.390388   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.390394   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.390400   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.396973   33104 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0927 17:44:29.403737   33104 system_pods.go:59] 24 kube-system pods found
	I0927 17:44:29.403771   33104 system_pods.go:61] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:44:29.403776   33104 system_pods.go:61] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:44:29.403780   33104 system_pods.go:61] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:44:29.403784   33104 system_pods.go:61] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:44:29.403787   33104 system_pods.go:61] "etcd-ha-748477-m03" [cd23c252-4153-4ed3-900a-ec3ec23a0b8a] Running
	I0927 17:44:29.403790   33104 system_pods.go:61] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:44:29.403794   33104 system_pods.go:61] "kindnet-66lb8" [613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba] Running
	I0927 17:44:29.403796   33104 system_pods.go:61] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:44:29.403800   33104 system_pods.go:61] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:44:29.403806   33104 system_pods.go:61] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:44:29.403810   33104 system_pods.go:61] "kube-apiserver-ha-748477-m03" [1ca56580-06a0-4c17-bfbf-fd04ca381250] Running
	I0927 17:44:29.403814   33104 system_pods.go:61] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:44:29.403818   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:44:29.403823   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m03" [db69354c-c220-4f2a-b350-ce715009dfea] Running
	I0927 17:44:29.403827   33104 system_pods.go:61] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:44:29.403830   33104 system_pods.go:61] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:44:29.403833   33104 system_pods.go:61] "kube-proxy-vwkqb" [cee9a1cd-cce3-4e30-8bbe-1597f7ff4277] Running
	I0927 17:44:29.403836   33104 system_pods.go:61] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:44:29.403839   33104 system_pods.go:61] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:44:29.403841   33104 system_pods.go:61] "kube-scheduler-ha-748477-m03" [e9b04f8f-f820-455c-b70c-103a54bf9944] Running
	I0927 17:44:29.403845   33104 system_pods.go:61] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:44:29.403847   33104 system_pods.go:61] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:44:29.403851   33104 system_pods.go:61] "kube-vip-ha-748477-m03" [5f5c717e-5d86-4b0b-bd34-b4f8eb1f8eca] Running
	I0927 17:44:29.403853   33104 system_pods.go:61] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:44:29.403859   33104 system_pods.go:74] duration metric: took 188.99624ms to wait for pod list to return data ...
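The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's token-bucket rate limiter (roughly QPS 5 / burst 10 when rest.Config leaves them unset), not from server-side API Priority and Fairness. A minimal sketch, assuming client-go and a placeholder kubeconfig path, of raising those limits before issuing the same kind of kube-system pod list:

	// Sketch only: raise client-go's client-side rate limits so list calls
	// like the ones above are not delayed. The kubeconfig path is a placeholder.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50   // default is about 5 when unset
		cfg.Burst = 100 // default is about 10 when unset
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	}

With the limits raised, a burst of list calls like the one in this log no longer triggers the client-side wait messages.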
	I0927 17:44:29.403865   33104 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:44:29.590098   33104 request.go:632] Waited for 186.16112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:44:29.590155   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:44:29.590162   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.590171   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.590178   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.593809   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.593933   33104 default_sa.go:45] found service account: "default"
	I0927 17:44:29.593953   33104 default_sa.go:55] duration metric: took 190.081669ms for default service account to be created ...
	I0927 17:44:29.593963   33104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:44:29.790359   33104 request.go:632] Waited for 196.323191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.790417   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.790423   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.790430   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.790435   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.798546   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:44:29.805235   33104 system_pods.go:86] 24 kube-system pods found
	I0927 17:44:29.805269   33104 system_pods.go:89] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:44:29.805277   33104 system_pods.go:89] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:44:29.805283   33104 system_pods.go:89] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:44:29.805288   33104 system_pods.go:89] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:44:29.805293   33104 system_pods.go:89] "etcd-ha-748477-m03" [cd23c252-4153-4ed3-900a-ec3ec23a0b8a] Running
	I0927 17:44:29.805299   33104 system_pods.go:89] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:44:29.805304   33104 system_pods.go:89] "kindnet-66lb8" [613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba] Running
	I0927 17:44:29.805309   33104 system_pods.go:89] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:44:29.805315   33104 system_pods.go:89] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:44:29.805321   33104 system_pods.go:89] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:44:29.805328   33104 system_pods.go:89] "kube-apiserver-ha-748477-m03" [1ca56580-06a0-4c17-bfbf-fd04ca381250] Running
	I0927 17:44:29.805337   33104 system_pods.go:89] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:44:29.805352   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:44:29.805358   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m03" [db69354c-c220-4f2a-b350-ce715009dfea] Running
	I0927 17:44:29.805364   33104 system_pods.go:89] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:44:29.805371   33104 system_pods.go:89] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:44:29.805379   33104 system_pods.go:89] "kube-proxy-vwkqb" [cee9a1cd-cce3-4e30-8bbe-1597f7ff4277] Running
	I0927 17:44:29.805386   33104 system_pods.go:89] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:44:29.805394   33104 system_pods.go:89] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:44:29.805400   33104 system_pods.go:89] "kube-scheduler-ha-748477-m03" [e9b04f8f-f820-455c-b70c-103a54bf9944] Running
	I0927 17:44:29.805408   33104 system_pods.go:89] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:44:29.805414   33104 system_pods.go:89] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:44:29.805421   33104 system_pods.go:89] "kube-vip-ha-748477-m03" [5f5c717e-5d86-4b0b-bd34-b4f8eb1f8eca] Running
	I0927 17:44:29.805427   33104 system_pods.go:89] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:44:29.805437   33104 system_pods.go:126] duration metric: took 211.464032ms to wait for k8s-apps to be running ...
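The "waiting for k8s-apps to be running" check above amounts to polling the kube-system pod list until every phase is Running. A minimal sketch of that polling pattern, assuming client-go's wait helpers and the same placeholder kubeconfig; the interval and timeout values are illustrative, not minikube's own:

	// Sketch only: poll kube-system pods until all report phase Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
				if err != nil {
					return false, nil // treat list errors as transient and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all kube-system pods are Running")
	}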
	I0927 17:44:29.805449   33104 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:44:29.805501   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:44:29.820712   33104 system_svc.go:56] duration metric: took 15.24207ms WaitForService to wait for kubelet
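The kubelet wait above shells out to systemctl over minikube's SSH runner; `is-active --quiet` prints nothing and reports purely through its exit status. A minimal local sketch of the same idea (run directly on the node rather than over SSH; the unit name "kubelet" is assumed):

	// Sketch only: check a systemd unit the way the log above does.
	// systemctl is-active exits 0 when the unit is active, non-zero otherwise.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}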
	I0927 17:44:29.820739   33104 kubeadm.go:582] duration metric: took 23.124868861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:44:29.820756   33104 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:44:29.990257   33104 request.go:632] Waited for 169.421001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0927 17:44:29.990309   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0927 17:44:29.990315   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.990322   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.990328   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.994594   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:29.995485   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995514   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995525   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995529   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995532   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995536   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995540   33104 node_conditions.go:105] duration metric: took 174.779797ms to run NodePressure ...
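The NodePressure step above reads each node's reported capacity (17734596Ki of ephemeral storage and 2 CPUs per node in this run). A minimal sketch, assuming the same placeholder kubeconfig, that lists the nodes and prints those two capacity fields:

	// Sketch only: print the node capacity fields the NodePressure check logs.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
	}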
	I0927 17:44:29.995551   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:44:29.995569   33104 start.go:255] writing updated cluster config ...
	I0927 17:44:29.995843   33104 ssh_runner.go:195] Run: rm -f paused
	I0927 17:44:30.046784   33104 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 17:44:30.049020   33104 out.go:177] * Done! kubectl is now configured to use "ha-748477" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.546404905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459296546380666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81c3413a-1141-4787-8130-b6b41ed07204 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.546937374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d502fa4b-0a90-491e-975e-66ec162ed39a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.546998289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d502fa4b-0a90-491e-975e-66ec162ed39a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.547296652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d502fa4b-0a90-491e-975e-66ec162ed39a name=/runtime.v1.RuntimeService/ListContainers
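The CRI-O entries above are CRI gRPC round trips (Version, ImageFsInfo, ListContainers), typically issued by the kubelet's runtime client every few hundred milliseconds; on the node, `crictl ps` and `crictl imagefsinfo` surface the same data. A minimal sketch, assuming CRI-O's default socket path and the k8s.io/cri-api v1 client, that issues the same two queries:

	// Sketch only: query CRI-O over its unix socket for image filesystem
	// usage and the container list, mirroring the RPCs in the log above.
	// Socket path and module versions are assumptions.
	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		ctx := context.Background()

		img := runtimeapi.NewImageServiceClient(conn)
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			panic(err)
		}
		for _, f := range fs.ImageFilesystems {
			fmt.Printf("image fs %s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
		}

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%s %s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}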
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.583725148Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=722a9750-fea4-4f14-8bb0-0aac8fdd5e0f name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.583798356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=722a9750-fea4-4f14-8bb0-0aac8fdd5e0f name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.584766648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8df19f0-342b-43ae-b7b5-cf40341e2d57 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.585159977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459296585139757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8df19f0-342b-43ae-b7b5-cf40341e2d57 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.585659785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff509e86-cc0d-412a-a038-fb03086ca788 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.585717028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff509e86-cc0d-412a-a038-fb03086ca788 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.585997227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ff509e86-cc0d-412a-a038-fb03086ca788 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.627634149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a87334c6-4e31-44c5-8255-6930b3e4204e name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.627719759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a87334c6-4e31-44c5-8255-6930b3e4204e name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.628659898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89c317ac-72dc-4448-a33d-738f282837f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.629108848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459296629086433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89c317ac-72dc-4448-a33d-738f282837f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.629798022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f865512f-1ac3-4c5b-927f-1d2b8615031c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.629902517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f865512f-1ac3-4c5b-927f-1d2b8615031c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.630470354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f865512f-1ac3-4c5b-927f-1d2b8615031c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.684037058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52e0f3fe-8bf3-431f-8219-944636344644 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.684133750Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52e0f3fe-8bf3-431f-8219-944636344644 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.685527750Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a8715b1-1f14-4838-ad7d-25d2ee5263cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.686133801Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459296686108653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a8715b1-1f14-4838-ad7d-25d2ee5263cb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.687060514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba0a066f-aa62-4fa9-b537-8ab14a62ae48 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.687111193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba0a066f-aa62-4fa9-b537-8ab14a62ae48 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:16 ha-748477 crio[659]: time="2024-09-27 17:48:16.687366101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba0a066f-aa62-4fa9-b537-8ab14a62ae48 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82d138d00329a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   9af32827ca87e       busybox-7dff88458-j7gsn
	d07f02e11f879       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ce8d3fbc4ee43       coredns-7c65d6cfc9-qvp2z
	de0f399d2276a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   4c986f9d250c3       coredns-7c65d6cfc9-n99lr
	a7ccc536c4df9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   37067721a3573       storage-provisioner
	cd62df5a50cfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   61f84fe579fbd       kindnet-5wl4m
	42146256b0e01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   dc1e025d5f18b       kube-proxy-p76v9
	4caed5948aafe       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   48cfa3bbc5e9d       kube-vip-ha-748477
	d2acf98043067       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f25008a681435       kube-scheduler-ha-748477
	72fe2a883c95c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   9199f6af07950       etcd-ha-748477
	c7ca45fc1dbb1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9ace3b28f636e       kube-controller-manager-ha-748477
	657c5e75829c7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9ca07019cd0cf       kube-apiserver-ha-748477
	
	
	==> coredns [d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777] <==
	[INFO] 10.244.0.4:55585 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166646s
	[INFO] 10.244.0.4:56311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002436177s
	[INFO] 10.244.0.4:45590 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110873s
	[INFO] 10.244.2.2:43192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152715s
	[INFO] 10.244.2.2:44388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177447s
	[INFO] 10.244.2.2:33554 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065853s
	[INFO] 10.244.2.2:58628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162914s
	[INFO] 10.244.1.2:38819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129715s
	[INFO] 10.244.1.2:60816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097737s
	[INFO] 10.244.1.2:36546 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014954s
	[INFO] 10.244.1.2:33829 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081077s
	[INFO] 10.244.1.2:59687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088947s
	[INFO] 10.244.0.4:40268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120362s
	[INFO] 10.244.0.4:38614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077477s
	[INFO] 10.244.0.4:40222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068679s
	[INFO] 10.244.2.2:51489 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133892s
	[INFO] 10.244.1.2:34773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000265454s
	[INFO] 10.244.0.4:56542 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227377s
	[INFO] 10.244.0.4:38585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133165s
	[INFO] 10.244.2.2:32823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133184s
	[INFO] 10.244.2.2:47801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112308s
	[INFO] 10.244.2.2:52586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146231s
	[INFO] 10.244.1.2:50376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194279s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116551s
	[INFO] 10.244.1.2:45074 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069954s
	
	
	==> coredns [de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa] <==
	[INFO] 10.244.2.2:47453 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472755s
	[INFO] 10.244.1.2:51710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208951s
	[INFO] 10.244.1.2:47395 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000128476s
	[INFO] 10.244.1.2:39764 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001916816s
	[INFO] 10.244.0.4:60403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125998s
	[INFO] 10.244.0.4:36329 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177364s
	[INFO] 10.244.0.4:33684 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001089s
	[INFO] 10.244.2.2:47662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002007928s
	[INFO] 10.244.2.2:59058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158193s
	[INFO] 10.244.2.2:40790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001715411s
	[INFO] 10.244.2.2:48349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153048s
	[INFO] 10.244.1.2:55724 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002121618s
	[INFO] 10.244.1.2:41603 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096809s
	[INFO] 10.244.1.2:57083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631103s
	[INFO] 10.244.0.4:48117 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103399s
	[INFO] 10.244.2.2:56316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155752s
	[INFO] 10.244.2.2:36039 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172138s
	[INFO] 10.244.2.2:39197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113674s
	[INFO] 10.244.1.2:59834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130099s
	[INFO] 10.244.1.2:54472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087078s
	[INFO] 10.244.1.2:42463 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079936s
	[INFO] 10.244.0.4:58994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021944s
	[INFO] 10.244.0.4:50757 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135494s
	[INFO] 10.244.2.2:35416 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170114s
	[INFO] 10.244.1.2:50172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011348s
	
	
	==> describe nodes <==
	Name:               ha-748477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:41:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:42:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-748477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492d2104e50247c88ce564105fa6e436
	  System UUID:                492d2104-e502-47c8-8ce5-64105fa6e436
	  Boot ID:                    e44f404a-867d-4f4e-a185-458196aac718
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j7gsn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 coredns-7c65d6cfc9-n99lr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7c65d6cfc9-qvp2z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-748477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-5wl4m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-748477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-748477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-p76v9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-748477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-748477                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m15s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-748477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-748477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-748477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal  NodeReady                6m4s   kubelet          Node ha-748477 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	
	
	Name:               ha-748477-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:42:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:45:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    ha-748477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a797c0b98fa454a9290261a4120ee96
	  System UUID:                1a797c0b-98fa-454a-9290-261a4120ee96
	  Boot ID:                    be8b9b76-5b30-449e-8e6a-b392c8bc637d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xmqtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-748477-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-r9smp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-748477-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-controller-manager-ha-748477-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-kxwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-748477-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-vip-ha-748477-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m28s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m28s)  kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m28s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-748477-m02 status is now: NodeNotReady
	
	
	Name:               ha-748477-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:44:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-748477-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f10cf0e49714a128d45f579afd701d8
	  System UUID:                7f10cf0e-4971-4a12-8d45-f579afd701d8
	  Boot ID:                    8028882c-9e9e-4142-9736-fa20678b0690
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8fcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 etcd-ha-748477-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-66lb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-ha-748477-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-748477-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-vwkqb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-ha-748477-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-748477-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m15s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m15s)  kubelet          Node ha-748477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m15s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	
	
	Name:               ha-748477-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_45_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:45:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-748477-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53bc6a6bc9f74a04882f5b53ace38c50
	  System UUID:                53bc6a6b-c9f7-4a04-882f-5b53ace38c50
	  Boot ID:                    797c4344-bca4-4508-93c8-92db2f3a4663
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8kdps       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m9s
	  kube-system                 kube-proxy-t92jl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)  kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m8s                 node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-748477-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep27 17:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038191] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.766886] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.994968] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.572771] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.496309] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.056667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051200] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.195115] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.125330] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279617] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.856213] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.390156] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.062929] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.000255] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.085204] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 17:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.205900] kauditd_printk_skb: 38 callbacks suppressed
	[ +42.959337] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771] <==
	{"level":"warn","ts":"2024-09-27T17:48:16.896551Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.922530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.929904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.943885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.948581Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.959539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.966857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.973724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.977023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.980752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.987635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.994765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:16.996572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.002014Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.008615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.013244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.020076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.026613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.035084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.039802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.043436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.047643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.057616Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.067326Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:17.096624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:48:17 up 7 min,  0 users,  load average: 0.26, 0.31, 0.17
	Linux ha-748477 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f] <==
	I0927 17:47:42.266312       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:47:52.271508       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:47:52.271560       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:47:52.271730       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:47:52.271751       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:47:52.271828       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:47:52.271846       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:47:52.271909       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:47:52.271927       1 main.go:299] handling current node
	I0927 17:48:02.265005       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:48:02.265095       1 main.go:299] handling current node
	I0927 17:48:02.265110       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:48:02.265116       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:48:02.265396       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:48:02.265422       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:48:02.265476       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:48:02.265494       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:48:12.271840       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:48:12.271870       1 main.go:299] handling current node
	I0927 17:48:12.271884       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:48:12.271888       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:48:12.272009       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:48:12.272015       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:48:12.272064       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:48:12.272069       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf] <==
	W0927 17:41:54.285503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0927 17:41:54.286484       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 17:41:54.291279       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 17:41:54.388865       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 17:41:55.517839       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 17:41:55.539342       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 17:41:55.549868       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 17:41:59.140843       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 17:42:00.286046       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 17:44:36.903808       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44866: use of closed network connection
	E0927 17:44:37.083629       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44890: use of closed network connection
	E0927 17:44:37.325665       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44898: use of closed network connection
	E0927 17:44:37.513055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44922: use of closed network connection
	E0927 17:44:37.702332       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44948: use of closed network connection
	E0927 17:44:37.883878       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44974: use of closed network connection
	E0927 17:44:38.055802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44990: use of closed network connection
	E0927 17:44:38.236694       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45008: use of closed network connection
	E0927 17:44:38.403967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45026: use of closed network connection
	E0927 17:44:38.704686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45048: use of closed network connection
	E0927 17:44:38.877491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45076: use of closed network connection
	E0927 17:44:39.052837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45094: use of closed network connection
	E0927 17:44:39.232482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45114: use of closed network connection
	E0927 17:44:39.403972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45138: use of closed network connection
	E0927 17:44:39.594519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45158: use of closed network connection
	W0927 17:46:04.298556       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.225]
	
	
	==> kube-controller-manager [c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36] <==
	I0927 17:45:08.716652       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-748477-m04\" does not exist"
	I0927 17:45:08.760763       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-748477-m04" podCIDRs=["10.244.3.0/24"]
	I0927 17:45:08.760823       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:08.760843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.011937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.385318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.574027       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-748477-m04"
	I0927 17:45:09.640869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:11.430286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:11.479780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:12.942848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:12.962049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:18.969210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:29.722225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:45:29.722369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:29.743285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:31.451751       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:39.404025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:46:24.602364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:46:24.602509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:24.628682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:24.710382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.746809ms"
	I0927 17:46:24.710519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.102µs"
	I0927 17:46:26.579533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:29.873026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	
	
	==> kube-proxy [42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 17:42:01.081502       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 17:42:01.110880       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	E0927 17:42:01.111017       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:42:01.147630       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:42:01.147672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:42:01.147695       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:42:01.150196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:42:01.150782       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:42:01.150809       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:42:01.154388       1 config.go:199] "Starting service config controller"
	I0927 17:42:01.154878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:42:01.155097       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:42:01.155116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:42:01.157808       1 config.go:328] "Starting node config controller"
	I0927 17:42:01.157840       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 17:42:01.256235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:42:01.256497       1 shared_informer.go:320] Caches are synced for service config
	I0927 17:42:01.258142       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756] <==
	E0927 17:44:02.933717       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-66lb8\": pod kindnet-66lb8 is already assigned to node \"ha-748477-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-66lb8" node="ha-748477-m03"
	E0927 17:44:02.934559       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba(kube-system/kindnet-66lb8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-66lb8"
	E0927 17:44:02.935616       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-66lb8\": pod kindnet-66lb8 is already assigned to node \"ha-748477-m03\"" pod="kube-system/kindnet-66lb8"
	I0927 17:44:02.935846       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-66lb8" node="ha-748477-m03"
	E0927 17:44:02.934408       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vwkqb\": pod kube-proxy-vwkqb is already assigned to node \"ha-748477-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vwkqb" node="ha-748477-m03"
	E0927 17:44:02.938352       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cee9a1cd-cce3-4e30-8bbe-1597f7ff4277(kube-system/kube-proxy-vwkqb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vwkqb"
	E0927 17:44:02.938437       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vwkqb\": pod kube-proxy-vwkqb is already assigned to node \"ha-748477-m03\"" pod="kube-system/kube-proxy-vwkqb"
	I0927 17:44:02.938478       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vwkqb" node="ha-748477-m03"
	E0927 17:44:31.066581       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j7gsn\": pod busybox-7dff88458-j7gsn is already assigned to node \"ha-748477\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-j7gsn" node="ha-748477"
	E0927 17:44:31.066642       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07233d33-34ed-44e8-a9d5-376e1860ca0c(default/busybox-7dff88458-j7gsn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-j7gsn"
	E0927 17:44:31.066658       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j7gsn\": pod busybox-7dff88458-j7gsn is already assigned to node \"ha-748477\"" pod="default/busybox-7dff88458-j7gsn"
	I0927 17:44:31.066676       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-j7gsn" node="ha-748477"
	E0927 17:44:31.089611       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xmqtg\": pod busybox-7dff88458-xmqtg is already assigned to node \"ha-748477-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xmqtg" node="ha-748477-m02"
	E0927 17:44:31.092159       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bd416f42-71bf-42f9-8e17-921e5b35333b(default/busybox-7dff88458-xmqtg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xmqtg"
	E0927 17:44:31.092486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xmqtg\": pod busybox-7dff88458-xmqtg is already assigned to node \"ha-748477-m02\"" pod="default/busybox-7dff88458-xmqtg"
	I0927 17:44:31.092797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xmqtg" node="ha-748477-m02"
	E0927 17:44:31.312466       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-tpc4p\" not found" pod="default/busybox-7dff88458-tpc4p"
	E0927 17:45:08.782464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.782636       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8041369a-60b6-46ac-ae40-2a232d799caf(kube-system/kindnet-gls7h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gls7h"
	E0927 17:45:08.782676       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" pod="kube-system/kindnet-gls7h"
	I0927 17:45:08.782749       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.783276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:45:08.785675       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fc28a65-d0e3-476e-bc9e-ff4e9f2e85ac(kube-system/kube-proxy-z2tnx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z2tnx"
	E0927 17:45:08.785786       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" pod="kube-system/kube-proxy-z2tnx"
	I0927 17:45:08.785868       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	
	
	==> kubelet <==
	Sep 27 17:46:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:46:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:46:55 ha-748477 kubelet[1304]: E0927 17:46:55.552924    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459215552461142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:46:55 ha-748477 kubelet[1304]: E0927 17:46:55.552961    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459215552461142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:05 ha-748477 kubelet[1304]: E0927 17:47:05.554669    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459225554270054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:05 ha-748477 kubelet[1304]: E0927 17:47:05.555306    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459225554270054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:15 ha-748477 kubelet[1304]: E0927 17:47:15.557097    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459235556635818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:15 ha-748477 kubelet[1304]: E0927 17:47:15.557135    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459235556635818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:25 ha-748477 kubelet[1304]: E0927 17:47:25.559322    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459245558659945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:25 ha-748477 kubelet[1304]: E0927 17:47:25.559377    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459245558659945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:35 ha-748477 kubelet[1304]: E0927 17:47:35.561127    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459255560855912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:35 ha-748477 kubelet[1304]: E0927 17:47:35.561197    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459255560855912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:45 ha-748477 kubelet[1304]: E0927 17:47:45.563216    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459265562750178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:45 ha-748477 kubelet[1304]: E0927 17:47:45.567283    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459265562750178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.507545    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:47:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.568682    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459275568451294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.568704    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459275568451294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:05 ha-748477 kubelet[1304]: E0927 17:48:05.570034    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459285569687152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:05 ha-748477 kubelet[1304]: E0927 17:48:05.570079    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459285569687152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:15 ha-748477 kubelet[1304]: E0927 17:48:15.571710    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459295571258556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:15 ha-748477 kubelet[1304]: E0927 17:48:15.572095    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459295571258556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-748477 -n ha-748477
helpers_test.go:261: (dbg) Run:  kubectl --context ha-748477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr: (3.993043698s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-748477 -n ha-748477
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 logs -n 25: (1.391499424s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m03_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m04 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp testdata/cp-test.txt                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m04_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03:/home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m03 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-748477 node stop m02 -v=7                                                     | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-748477 node start m02 -v=7                                                    | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:41:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:41:11.282351   33104 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:41:11.282459   33104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:41:11.282464   33104 out.go:358] Setting ErrFile to fd 2...
	I0927 17:41:11.282469   33104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:41:11.282697   33104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:41:11.283272   33104 out.go:352] Setting JSON to false
	I0927 17:41:11.284134   33104 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5016,"bootTime":1727453855,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:41:11.284236   33104 start.go:139] virtualization: kvm guest
	I0927 17:41:11.286413   33104 out.go:177] * [ha-748477] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:41:11.288037   33104 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:41:11.288045   33104 notify.go:220] Checking for updates...
	I0927 17:41:11.289671   33104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:41:11.291343   33104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:11.293056   33104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.294702   33104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:41:11.296107   33104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:41:11.297727   33104 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:41:11.334964   33104 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 17:41:11.336448   33104 start.go:297] selected driver: kvm2
	I0927 17:41:11.336470   33104 start.go:901] validating driver "kvm2" against <nil>
	I0927 17:41:11.336482   33104 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:41:11.337172   33104 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:41:11.337254   33104 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 17:41:11.353494   33104 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 17:41:11.353573   33104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 17:41:11.353841   33104 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:41:11.353874   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:11.353916   33104 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0927 17:41:11.353921   33104 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0927 17:41:11.353981   33104 start.go:340] cluster config:
	{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:41:11.354070   33104 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:41:11.356133   33104 out.go:177] * Starting "ha-748477" primary control-plane node in "ha-748477" cluster
	I0927 17:41:11.357496   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:41:11.357561   33104 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 17:41:11.357574   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:41:11.357669   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:41:11.357682   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:41:11.358001   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:41:11.358028   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json: {Name:mke89db25d5d216a50900f26b95b8fd2ee54cc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:11.358189   33104 start.go:360] acquireMachinesLock for ha-748477: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:41:11.358227   33104 start.go:364] duration metric: took 22.952µs to acquireMachinesLock for "ha-748477"
	I0927 17:41:11.358249   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:41:11.358314   33104 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 17:41:11.360140   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:41:11.360316   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:11.360378   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:11.375306   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0927 17:41:11.375759   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:11.376301   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:11.376329   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:11.376675   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:11.376850   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:11.377007   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:11.377148   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:41:11.377181   33104 client.go:168] LocalClient.Create starting
	I0927 17:41:11.377218   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:41:11.377295   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:41:11.377314   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:41:11.377384   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:41:11.377413   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:41:11.377441   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:41:11.377466   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:41:11.377486   33104 main.go:141] libmachine: (ha-748477) Calling .PreCreateCheck
	I0927 17:41:11.377873   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:11.378248   33104 main.go:141] libmachine: Creating machine...
	I0927 17:41:11.378289   33104 main.go:141] libmachine: (ha-748477) Calling .Create
	I0927 17:41:11.378436   33104 main.go:141] libmachine: (ha-748477) Creating KVM machine...
	I0927 17:41:11.379983   33104 main.go:141] libmachine: (ha-748477) DBG | found existing default KVM network
	I0927 17:41:11.380694   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.380548   33127 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b50}
	I0927 17:41:11.380717   33104 main.go:141] libmachine: (ha-748477) DBG | created network xml: 
	I0927 17:41:11.380729   33104 main.go:141] libmachine: (ha-748477) DBG | <network>
	I0927 17:41:11.380736   33104 main.go:141] libmachine: (ha-748477) DBG |   <name>mk-ha-748477</name>
	I0927 17:41:11.380744   33104 main.go:141] libmachine: (ha-748477) DBG |   <dns enable='no'/>
	I0927 17:41:11.380751   33104 main.go:141] libmachine: (ha-748477) DBG |   
	I0927 17:41:11.380761   33104 main.go:141] libmachine: (ha-748477) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 17:41:11.380765   33104 main.go:141] libmachine: (ha-748477) DBG |     <dhcp>
	I0927 17:41:11.380773   33104 main.go:141] libmachine: (ha-748477) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 17:41:11.380778   33104 main.go:141] libmachine: (ha-748477) DBG |     </dhcp>
	I0927 17:41:11.380786   33104 main.go:141] libmachine: (ha-748477) DBG |   </ip>
	I0927 17:41:11.380790   33104 main.go:141] libmachine: (ha-748477) DBG |   
	I0927 17:41:11.380886   33104 main.go:141] libmachine: (ha-748477) DBG | </network>
	I0927 17:41:11.380936   33104 main.go:141] libmachine: (ha-748477) DBG | 
	I0927 17:41:11.386015   33104 main.go:141] libmachine: (ha-748477) DBG | trying to create private KVM network mk-ha-748477 192.168.39.0/24...
	I0927 17:41:11.458118   33104 main.go:141] libmachine: (ha-748477) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 ...
	I0927 17:41:11.458145   33104 main.go:141] libmachine: (ha-748477) DBG | private KVM network mk-ha-748477 192.168.39.0/24 created
	I0927 17:41:11.458158   33104 main.go:141] libmachine: (ha-748477) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:41:11.458170   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.458056   33127 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.458262   33104 main.go:141] libmachine: (ha-748477) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:41:11.695851   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.695688   33127 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa...
	I0927 17:41:11.894120   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.893958   33127 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/ha-748477.rawdisk...
	I0927 17:41:11.894152   33104 main.go:141] libmachine: (ha-748477) DBG | Writing magic tar header
	I0927 17:41:11.894162   33104 main.go:141] libmachine: (ha-748477) DBG | Writing SSH key tar header
	I0927 17:41:11.894171   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:11.894079   33127 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 ...
	I0927 17:41:11.894191   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477
	I0927 17:41:11.894234   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477 (perms=drwx------)
	I0927 17:41:11.894262   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:41:11.894278   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:41:11.894286   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:41:11.894294   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:41:11.894300   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:41:11.894308   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:41:11.894314   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:41:11.894322   33104 main.go:141] libmachine: (ha-748477) DBG | Checking permissions on dir: /home
	I0927 17:41:11.894332   33104 main.go:141] libmachine: (ha-748477) DBG | Skipping /home - not owner
	I0927 17:41:11.894350   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:41:11.894382   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:41:11.894396   33104 main.go:141] libmachine: (ha-748477) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:41:11.894409   33104 main.go:141] libmachine: (ha-748477) Creating domain...
	I0927 17:41:11.895515   33104 main.go:141] libmachine: (ha-748477) define libvirt domain using xml: 
	I0927 17:41:11.895554   33104 main.go:141] libmachine: (ha-748477) <domain type='kvm'>
	I0927 17:41:11.895564   33104 main.go:141] libmachine: (ha-748477)   <name>ha-748477</name>
	I0927 17:41:11.895570   33104 main.go:141] libmachine: (ha-748477)   <memory unit='MiB'>2200</memory>
	I0927 17:41:11.895577   33104 main.go:141] libmachine: (ha-748477)   <vcpu>2</vcpu>
	I0927 17:41:11.895582   33104 main.go:141] libmachine: (ha-748477)   <features>
	I0927 17:41:11.895589   33104 main.go:141] libmachine: (ha-748477)     <acpi/>
	I0927 17:41:11.895594   33104 main.go:141] libmachine: (ha-748477)     <apic/>
	I0927 17:41:11.895600   33104 main.go:141] libmachine: (ha-748477)     <pae/>
	I0927 17:41:11.895611   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.895618   33104 main.go:141] libmachine: (ha-748477)   </features>
	I0927 17:41:11.895625   33104 main.go:141] libmachine: (ha-748477)   <cpu mode='host-passthrough'>
	I0927 17:41:11.895636   33104 main.go:141] libmachine: (ha-748477)   
	I0927 17:41:11.895642   33104 main.go:141] libmachine: (ha-748477)   </cpu>
	I0927 17:41:11.895652   33104 main.go:141] libmachine: (ha-748477)   <os>
	I0927 17:41:11.895658   33104 main.go:141] libmachine: (ha-748477)     <type>hvm</type>
	I0927 17:41:11.895667   33104 main.go:141] libmachine: (ha-748477)     <boot dev='cdrom'/>
	I0927 17:41:11.895677   33104 main.go:141] libmachine: (ha-748477)     <boot dev='hd'/>
	I0927 17:41:11.895684   33104 main.go:141] libmachine: (ha-748477)     <bootmenu enable='no'/>
	I0927 17:41:11.895695   33104 main.go:141] libmachine: (ha-748477)   </os>
	I0927 17:41:11.895726   33104 main.go:141] libmachine: (ha-748477)   <devices>
	I0927 17:41:11.895746   33104 main.go:141] libmachine: (ha-748477)     <disk type='file' device='cdrom'>
	I0927 17:41:11.895755   33104 main.go:141] libmachine: (ha-748477)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/boot2docker.iso'/>
	I0927 17:41:11.895767   33104 main.go:141] libmachine: (ha-748477)       <target dev='hdc' bus='scsi'/>
	I0927 17:41:11.895779   33104 main.go:141] libmachine: (ha-748477)       <readonly/>
	I0927 17:41:11.895787   33104 main.go:141] libmachine: (ha-748477)     </disk>
	I0927 17:41:11.895799   33104 main.go:141] libmachine: (ha-748477)     <disk type='file' device='disk'>
	I0927 17:41:11.895810   33104 main.go:141] libmachine: (ha-748477)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:41:11.895825   33104 main.go:141] libmachine: (ha-748477)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/ha-748477.rawdisk'/>
	I0927 17:41:11.895835   33104 main.go:141] libmachine: (ha-748477)       <target dev='hda' bus='virtio'/>
	I0927 17:41:11.895843   33104 main.go:141] libmachine: (ha-748477)     </disk>
	I0927 17:41:11.895850   33104 main.go:141] libmachine: (ha-748477)     <interface type='network'>
	I0927 17:41:11.895865   33104 main.go:141] libmachine: (ha-748477)       <source network='mk-ha-748477'/>
	I0927 17:41:11.895880   33104 main.go:141] libmachine: (ha-748477)       <model type='virtio'/>
	I0927 17:41:11.895892   33104 main.go:141] libmachine: (ha-748477)     </interface>
	I0927 17:41:11.895902   33104 main.go:141] libmachine: (ha-748477)     <interface type='network'>
	I0927 17:41:11.895912   33104 main.go:141] libmachine: (ha-748477)       <source network='default'/>
	I0927 17:41:11.895923   33104 main.go:141] libmachine: (ha-748477)       <model type='virtio'/>
	I0927 17:41:11.895932   33104 main.go:141] libmachine: (ha-748477)     </interface>
	I0927 17:41:11.895944   33104 main.go:141] libmachine: (ha-748477)     <serial type='pty'>
	I0927 17:41:11.895957   33104 main.go:141] libmachine: (ha-748477)       <target port='0'/>
	I0927 17:41:11.895968   33104 main.go:141] libmachine: (ha-748477)     </serial>
	I0927 17:41:11.895990   33104 main.go:141] libmachine: (ha-748477)     <console type='pty'>
	I0927 17:41:11.896002   33104 main.go:141] libmachine: (ha-748477)       <target type='serial' port='0'/>
	I0927 17:41:11.896015   33104 main.go:141] libmachine: (ha-748477)     </console>
	I0927 17:41:11.896031   33104 main.go:141] libmachine: (ha-748477)     <rng model='virtio'>
	I0927 17:41:11.896046   33104 main.go:141] libmachine: (ha-748477)       <backend model='random'>/dev/random</backend>
	I0927 17:41:11.896060   33104 main.go:141] libmachine: (ha-748477)     </rng>
	I0927 17:41:11.896070   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.896076   33104 main.go:141] libmachine: (ha-748477)     
	I0927 17:41:11.896083   33104 main.go:141] libmachine: (ha-748477)   </devices>
	I0927 17:41:11.896087   33104 main.go:141] libmachine: (ha-748477) </domain>
	I0927 17:41:11.896095   33104 main.go:141] libmachine: (ha-748477) 
	I0927 17:41:11.900567   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:73:40:b9 in network default
	I0927 17:41:11.901061   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:11.901075   33104 main.go:141] libmachine: (ha-748477) Ensuring networks are active...
	I0927 17:41:11.901826   33104 main.go:141] libmachine: (ha-748477) Ensuring network default is active
	I0927 17:41:11.902116   33104 main.go:141] libmachine: (ha-748477) Ensuring network mk-ha-748477 is active
	I0927 17:41:11.902614   33104 main.go:141] libmachine: (ha-748477) Getting domain xml...
	I0927 17:41:11.903566   33104 main.go:141] libmachine: (ha-748477) Creating domain...
	I0927 17:41:13.125948   33104 main.go:141] libmachine: (ha-748477) Waiting to get IP...
	I0927 17:41:13.126613   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.126980   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.127001   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.126925   33127 retry.go:31] will retry after 221.741675ms: waiting for machine to come up
	I0927 17:41:13.350389   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.350866   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.350891   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.350820   33127 retry.go:31] will retry after 384.917671ms: waiting for machine to come up
	I0927 17:41:13.737469   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:13.737940   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:13.737963   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:13.737901   33127 retry.go:31] will retry after 357.409754ms: waiting for machine to come up
	I0927 17:41:14.096593   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:14.097137   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:14.097157   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:14.097100   33127 retry.go:31] will retry after 455.369509ms: waiting for machine to come up
	I0927 17:41:14.553700   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:14.554092   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:14.554138   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:14.554063   33127 retry.go:31] will retry after 555.024151ms: waiting for machine to come up
	I0927 17:41:15.111039   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:15.111576   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:15.111596   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:15.111511   33127 retry.go:31] will retry after 767.019564ms: waiting for machine to come up
	I0927 17:41:15.880561   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:15.880971   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:15.881009   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:15.880933   33127 retry.go:31] will retry after 930.894786ms: waiting for machine to come up
	I0927 17:41:16.814028   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:16.814547   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:16.814568   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:16.814503   33127 retry.go:31] will retry after 1.391282407s: waiting for machine to come up
	I0927 17:41:18.208116   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:18.208453   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:18.208476   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:18.208423   33127 retry.go:31] will retry after 1.406630844s: waiting for machine to come up
	I0927 17:41:19.617054   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:19.617491   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:19.617513   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:19.617444   33127 retry.go:31] will retry after 1.955568674s: waiting for machine to come up
	I0927 17:41:21.574672   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:21.575031   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:21.575056   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:21.574984   33127 retry.go:31] will retry after 2.462121776s: waiting for machine to come up
	I0927 17:41:24.039742   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:24.040176   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:24.040197   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:24.040139   33127 retry.go:31] will retry after 3.071571928s: waiting for machine to come up
	I0927 17:41:27.113044   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:27.113494   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:27.113522   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:27.113444   33127 retry.go:31] will retry after 3.158643907s: waiting for machine to come up
	I0927 17:41:30.273431   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:30.273901   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find current IP address of domain ha-748477 in network mk-ha-748477
	I0927 17:41:30.273928   33104 main.go:141] libmachine: (ha-748477) DBG | I0927 17:41:30.273851   33127 retry.go:31] will retry after 4.144134204s: waiting for machine to come up
	I0927 17:41:34.421621   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.421958   33104 main.go:141] libmachine: (ha-748477) Found IP for machine: 192.168.39.217
	I0927 17:41:34.421985   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has current primary IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.421995   33104 main.go:141] libmachine: (ha-748477) Reserving static IP address...
	I0927 17:41:34.422371   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find host DHCP lease matching {name: "ha-748477", mac: "52:54:00:cf:7b:81", ip: "192.168.39.217"} in network mk-ha-748477
	I0927 17:41:34.496658   33104 main.go:141] libmachine: (ha-748477) Reserved static IP address: 192.168.39.217
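
The "Waiting to get IP" retries above poll the libvirt network for a DHCP lease whose MAC matches the domain, backing off between attempts. A rough sketch of that loop, assuming the libvirt Go bindings are used directly (names and timeouts are assumptions, not minikube's code):

package main

import (
	"fmt"
	"strings"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP polls the network's DHCP leases until one matches the given MAC.
func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
	nw, err := conn.LookupNetworkByName(network)
	if err != nil {
		return "", err
	}
	defer nw.Free()

	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(wait)
		wait *= 2 // back off, as the retry.go messages above do
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ip, err := waitForIP(conn, "mk-ha-748477", "52:54:00:cf:7b:81", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("found IP:", ip)
}
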
	I0927 17:41:34.496683   33104 main.go:141] libmachine: (ha-748477) Waiting for SSH to be available...
	I0927 17:41:34.496692   33104 main.go:141] libmachine: (ha-748477) DBG | Getting to WaitForSSH function...
	I0927 17:41:34.499481   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:34.499883   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477
	I0927 17:41:34.499908   33104 main.go:141] libmachine: (ha-748477) DBG | unable to find defined IP address of network mk-ha-748477 interface with MAC address 52:54:00:cf:7b:81
	I0927 17:41:34.500086   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH client type: external
	I0927 17:41:34.500117   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa (-rw-------)
	I0927 17:41:34.500142   33104 main.go:141] libmachine: (ha-748477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:41:34.500152   33104 main.go:141] libmachine: (ha-748477) DBG | About to run SSH command:
	I0927 17:41:34.500164   33104 main.go:141] libmachine: (ha-748477) DBG | exit 0
	I0927 17:41:34.503851   33104 main.go:141] libmachine: (ha-748477) DBG | SSH cmd err, output: exit status 255: 
	I0927 17:41:34.503922   33104 main.go:141] libmachine: (ha-748477) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 17:41:34.503936   33104 main.go:141] libmachine: (ha-748477) DBG | command : exit 0
	I0927 17:41:34.503943   33104 main.go:141] libmachine: (ha-748477) DBG | err     : exit status 255
	I0927 17:41:34.503959   33104 main.go:141] libmachine: (ha-748477) DBG | output  : 
	I0927 17:41:37.504545   33104 main.go:141] libmachine: (ha-748477) DBG | Getting to WaitForSSH function...
	I0927 17:41:37.507144   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.507648   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.507672   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.507819   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH client type: external
	I0927 17:41:37.507868   33104 main.go:141] libmachine: (ha-748477) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa (-rw-------)
	I0927 17:41:37.507900   33104 main.go:141] libmachine: (ha-748477) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:41:37.507920   33104 main.go:141] libmachine: (ha-748477) DBG | About to run SSH command:
	I0927 17:41:37.507941   33104 main.go:141] libmachine: (ha-748477) DBG | exit 0
	I0927 17:41:37.630810   33104 main.go:141] libmachine: (ha-748477) DBG | SSH cmd err, output: <nil>: 
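
Once a lease exists, the driver keeps running "exit 0" over SSH until the guest answers (the first attempt above fails with exit status 255 because no IP address was known yet). A comparable probe with golang.org/x/crypto/ssh could look like the following; the function name, address, and timeouts are assumptions, not minikube's code:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the machine with the generated private key and runs
// "exit 0" until it succeeds or the overall timeout expires.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				rerr := session.Run("exit 0")
				session.Close()
				client.Close()
				if rerr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second) // the log above also retries after a few seconds
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.217:22", "docker", "/path/to/id_rsa", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("SSH is up")
}
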
	I0927 17:41:37.631066   33104 main.go:141] libmachine: (ha-748477) KVM machine creation complete!
	I0927 17:41:37.631372   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:37.631910   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:37.632095   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:37.632272   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:41:37.632285   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:37.633516   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:41:37.633528   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:41:37.633533   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:41:37.633550   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.635751   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.636081   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.636099   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.636220   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.636388   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.636532   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.636625   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.636778   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.636951   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.636961   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:41:37.734259   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:41:37.734293   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:41:37.734303   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.737128   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.737466   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.737495   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.737627   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.737846   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.737998   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.738153   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.738274   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.738468   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.738480   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:41:37.835159   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:41:37.835214   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:41:37.835220   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:41:37.835227   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:37.835463   33104 buildroot.go:166] provisioning hostname "ha-748477"
	I0927 17:41:37.835485   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:37.835646   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.838659   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.838974   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.838995   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.839272   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.839470   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.839648   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.839769   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.839931   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.840140   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.840159   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477 && echo "ha-748477" | sudo tee /etc/hostname
	I0927 17:41:37.952689   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:41:37.952711   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:37.955478   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.955872   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:37.955904   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:37.956089   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:37.956272   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.956442   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:37.956569   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:37.956706   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:37.956867   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:37.956881   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:41:38.063375   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:41:38.063408   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:41:38.063477   33104 buildroot.go:174] setting up certificates
	I0927 17:41:38.063491   33104 provision.go:84] configureAuth start
	I0927 17:41:38.063509   33104 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:41:38.063799   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.066439   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.066780   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.066808   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.066982   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.069059   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.069387   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.069405   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.069581   33104 provision.go:143] copyHostCerts
	I0927 17:41:38.069625   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:41:38.069666   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:41:38.069678   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:41:38.069763   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:41:38.069850   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:41:38.069876   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:41:38.069882   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:41:38.069916   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:41:38.069980   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:41:38.070006   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:41:38.070015   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:41:38.070049   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:41:38.070101   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477 san=[127.0.0.1 192.168.39.217 ha-748477 localhost minikube]
	I0927 17:41:38.147021   33104 provision.go:177] copyRemoteCerts
	I0927 17:41:38.147089   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:41:38.147110   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.149977   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.150246   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.150274   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.150432   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.150602   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.150754   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.150921   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.228142   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:41:38.228227   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:41:38.251467   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:41:38.251538   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0927 17:41:38.274370   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:41:38.274489   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:41:38.296698   33104 provision.go:87] duration metric: took 233.191722ms to configureAuth
	I0927 17:41:38.296732   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:41:38.296932   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:41:38.297016   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.299619   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.299927   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.299966   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.300128   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.300322   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.300479   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.300682   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.300851   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:38.301048   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:38.301067   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:41:38.523444   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:41:38.523472   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:41:38.523483   33104 main.go:141] libmachine: (ha-748477) Calling .GetURL
	I0927 17:41:38.524760   33104 main.go:141] libmachine: (ha-748477) DBG | Using libvirt version 6000000
	I0927 17:41:38.527048   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.527364   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.527391   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.527606   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:41:38.527637   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:41:38.527650   33104 client.go:171] duration metric: took 27.150459274s to LocalClient.Create
	I0927 17:41:38.527678   33104 start.go:167] duration metric: took 27.150528415s to libmachine.API.Create "ha-748477"
	I0927 17:41:38.527690   33104 start.go:293] postStartSetup for "ha-748477" (driver="kvm2")
	I0927 17:41:38.527705   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:41:38.527728   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.527972   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:41:38.528001   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.530216   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.530626   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.530665   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.530772   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.530924   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.531065   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.531219   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.609034   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:41:38.613222   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:41:38.613247   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:41:38.613317   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:41:38.613401   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:41:38.613411   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:41:38.613506   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:41:38.622717   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:41:38.645459   33104 start.go:296] duration metric: took 117.75234ms for postStartSetup
	I0927 17:41:38.645507   33104 main.go:141] libmachine: (ha-748477) Calling .GetConfigRaw
	I0927 17:41:38.646122   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.648685   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.648941   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.648975   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.649188   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:41:38.649458   33104 start.go:128] duration metric: took 27.291131215s to createHost
	I0927 17:41:38.649491   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.651737   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.652093   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.652119   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.652302   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.652471   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.652616   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.652728   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.652843   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:41:38.653010   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:41:38.653020   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:41:38.751064   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458898.732716995
	
	I0927 17:41:38.751086   33104 fix.go:216] guest clock: 1727458898.732716995
	I0927 17:41:38.751094   33104 fix.go:229] Guest: 2024-09-27 17:41:38.732716995 +0000 UTC Remote: 2024-09-27 17:41:38.649473144 +0000 UTC m=+27.402870254 (delta=83.243851ms)
	I0927 17:41:38.751135   33104 fix.go:200] guest clock delta is within tolerance: 83.243851ms
	I0927 17:41:38.751145   33104 start.go:83] releasing machines lock for "ha-748477", held for 27.392909773s
	I0927 17:41:38.751166   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.751423   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:38.754190   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.754506   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.754527   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.754757   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755262   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755415   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:38.755525   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:41:38.755565   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.755625   33104 ssh_runner.go:195] Run: cat /version.json
	I0927 17:41:38.755649   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:38.758113   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758305   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758445   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.758479   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758603   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.758725   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:38.758751   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:38.758761   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.758893   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:38.758901   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.759041   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:38.759038   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.759157   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:38.759261   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:38.831198   33104 ssh_runner.go:195] Run: systemctl --version
	I0927 17:41:38.870670   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:41:39.025889   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:41:39.031712   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:41:39.031797   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:41:39.047705   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:41:39.047735   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:41:39.047802   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:41:39.063366   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:41:39.077273   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:41:39.077334   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:41:39.090744   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:41:39.103931   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:41:39.214425   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:41:39.364442   33104 docker.go:233] disabling docker service ...
	I0927 17:41:39.364513   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:41:39.380260   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:41:39.394355   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:41:39.522355   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:41:39.649820   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:41:39.663016   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:41:39.680505   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:41:39.680564   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.690319   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:41:39.690383   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.699872   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.709466   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.719082   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:41:39.729267   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.739369   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.757384   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:41:39.767495   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:41:39.776770   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:41:39.776822   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:41:39.789488   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:41:39.798777   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:41:39.926081   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
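
Taken together, the sed edits above converge on a CRI-O drop-in roughly like the following before crio is restarted (illustrative only; the real /etc/crio/crio.conf.d/02-crio.conf shipped in the ISO carries additional settings):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
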
	I0927 17:41:40.015516   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:41:40.015581   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:41:40.020128   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:41:40.020188   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:41:40.023698   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:41:40.059901   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:41:40.059966   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:41:40.086976   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:41:40.115858   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:41:40.117036   33104 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:41:40.119598   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:40.119937   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:40.119968   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:40.120181   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:41:40.124032   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:41:40.135947   33104 kubeadm.go:883] updating cluster {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 17:41:40.136051   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:41:40.136092   33104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:41:40.165756   33104 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 17:41:40.165826   33104 ssh_runner.go:195] Run: which lz4
	I0927 17:41:40.169366   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0927 17:41:40.169454   33104 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 17:41:40.173416   33104 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 17:41:40.173444   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 17:41:41.416629   33104 crio.go:462] duration metric: took 1.247195052s to copy over tarball
	I0927 17:41:41.416710   33104 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 17:41:43.420793   33104 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.004054416s)
	I0927 17:41:43.420819   33104 crio.go:469] duration metric: took 2.004155312s to extract the tarball
	I0927 17:41:43.420825   33104 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 17:41:43.457422   33104 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:41:43.499761   33104 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:41:43.499782   33104 cache_images.go:84] Images are preloaded, skipping loading
	I0927 17:41:43.499792   33104 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.1 crio true true} ...
	I0927 17:41:43.499910   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:41:43.499992   33104 ssh_runner.go:195] Run: crio config
	I0927 17:41:43.543198   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:43.543224   33104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 17:41:43.543236   33104 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 17:41:43.543262   33104 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-748477 NodeName:ha-748477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 17:41:43.543436   33104 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-748477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
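The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml. As a rough illustration of how such a config can be rendered from a parameter struct with Go's text/template, here is a minimal sketch; the struct, the field subset, and the template are illustrative assumptions, not minikube's actual template code.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// kubeadmParams holds only the fields exercised in this sketch; the real
	// options struct logged above carries many more.
	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		CRISocket         string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`
	
	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.39.217",
			BindPort:          8443,
			NodeName:          "ha-748477",
			CRISocket:         "unix:///var/run/crio/crio.sock",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.1",
		}
		// Render to stdout; the log above instead scp's the rendered bytes to the VM.
		if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}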
	I0927 17:41:43.543460   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:41:43.543509   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:41:43.558812   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:41:43.558948   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
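The kube-vip static Pod above is generated from the HA VIP (192.168.39.254) and written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. A hypothetical Go sketch that builds an equivalent manifest with the k8s.io/api types and sigs.k8s.io/yaml could look like the following; it is not minikube's kube-vip.go and carries only a subset of the environment variables.

	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)
	
	// kubeVIPPod builds a static-Pod manifest similar to the one in the log.
	// Only a subset of the env vars is shown; their order is not significant.
	func kubeVIPPod(vip, iface, image string) *corev1.Pod {
		env := map[string]string{
			"vip_arp":            "true",
			"port":               "8443",
			"vip_interface":      iface,
			"cp_enable":          "true",
			"vip_leaderelection": "true",
			"address":            vip,
			"lb_enable":          "true",
			"lb_port":            "8443",
		}
		var envVars []corev1.EnvVar
		for k, v := range env {
			envVars = append(envVars, corev1.EnvVar{Name: k, Value: v})
		}
		return &corev1.Pod{
			TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{Name: "kube-vip", Namespace: "kube-system"},
			Spec: corev1.PodSpec{
				HostNetwork: true,
				HostAliases: []corev1.HostAlias{{IP: "127.0.0.1", Hostnames: []string{"kubernetes"}}},
				Containers: []corev1.Container{{
					Name:            "kube-vip",
					Image:           image,
					ImagePullPolicy: corev1.PullIfNotPresent,
					Args:            []string{"manager"},
					Env:             envVars,
					SecurityContext: &corev1.SecurityContext{
						Capabilities: &corev1.Capabilities{
							Add: []corev1.Capability{"NET_ADMIN", "NET_RAW"},
						},
					},
					VolumeMounts: []corev1.VolumeMount{{Name: "kubeconfig", MountPath: "/etc/kubernetes/admin.conf"}},
				}},
				Volumes: []corev1.Volume{{
					Name: "kubeconfig",
					VolumeSource: corev1.VolumeSource{
						HostPath: &corev1.HostPathVolumeSource{Path: "/etc/kubernetes/super-admin.conf"},
					},
				}},
			},
		}
	}
	
	func main() {
		out, err := yaml.Marshal(kubeVIPPod("192.168.39.254", "eth0", "ghcr.io/kube-vip/kube-vip:v0.8.0"))
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}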
	I0927 17:41:43.559015   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:41:43.568537   33104 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 17:41:43.568607   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 17:41:43.577953   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0927 17:41:43.593972   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:41:43.611240   33104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 17:41:43.627698   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0927 17:41:43.643839   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:41:43.647475   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
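The bash one-liner above makes the /etc/hosts update idempotent: it strips any existing control-plane.minikube.internal record before appending the current VIP. A hypothetical Go equivalent of that rewrite (the helper name and error handling are assumptions, and blank lines are dropped rather than preserved) might be:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostRecord rewrites a hosts file so that exactly one line maps the
	// given hostname to ip, mirroring the grep -v / echo pipeline in the log.
	func ensureHostRecord(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue // drop blanks and any stale record for host
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := ensureHostRecord("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}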
	I0927 17:41:43.658814   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:41:43.786484   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:41:43.804054   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.217
	I0927 17:41:43.804083   33104 certs.go:194] generating shared ca certs ...
	I0927 17:41:43.804104   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:43.804286   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:41:43.804341   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:41:43.804355   33104 certs.go:256] generating profile certs ...
	I0927 17:41:43.804425   33104 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:41:43.804453   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt with IP's: []
	I0927 17:41:44.048105   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt ...
	I0927 17:41:44.048135   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt: {Name:mkd7683af781c2e3035297a91fe64cae3ec441ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.048290   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key ...
	I0927 17:41:44.048301   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key: {Name:mk936ca4ca8308f6e8f8130ae52fa2d91744c76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.048375   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce
	I0927 17:41:44.048390   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.254]
	I0927 17:41:44.272337   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce ...
	I0927 17:41:44.272368   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce: {Name:mkf1d6d3812ecb98203f4090aef1221789d1a599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.272516   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce ...
	I0927 17:41:44.272528   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce: {Name:mkb32ad35d33db5f9c4a13f60989170569fbf531 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.272591   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.3210c4ce -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:41:44.272698   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.3210c4ce -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:41:44.272754   33104 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:41:44.272768   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt with IP's: []
	I0927 17:41:44.519852   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt ...
	I0927 17:41:44.519879   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt: {Name:mk1051474491995de79f8f5636180a2c0021f95c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:44.520021   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key ...
	I0927 17:41:44.520031   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key: {Name:mkad9e4d33b049f5b649702366bd9b4b30c4cec0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
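The profile certificates above are issued against the shared minikubeCA, with the apiserver serving cert carrying the service IP, loopback, node IP, and HA VIP as SANs. A rough Go sketch of issuing such a serving certificate with crypto/x509 follows; the throwaway CA, key size, serial, and validity are arbitrary choices for illustration, not minikube's crypto.go.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	// issueServingCert signs a serving certificate for the given SAN IPs with an
	// existing CA key pair, roughly what the "generating signed profile cert"
	// steps above do.
	func issueServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, []byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}
	
	func main() {
		// Throwaway self-signed CA, just to exercise the helper.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		// Same SAN set as the apiserver cert in the log.
		ips := []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.254"),
		}
		certPEM, keyPEM, err := issueServingCert(caCert, caKey, ips)
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("apiserver.crt", certPEM, 0644)
		_ = os.WriteFile("apiserver.key", keyPEM, 0600)
	}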
	I0927 17:41:44.520090   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:41:44.520107   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:41:44.520117   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:41:44.520140   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:41:44.520152   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:41:44.520167   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:41:44.520179   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:41:44.520191   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:41:44.520236   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:41:44.520268   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:41:44.520279   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:41:44.520308   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:41:44.520329   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:41:44.520350   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:41:44.520386   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:41:44.520410   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.520426   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.520438   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.521064   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:41:44.546442   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:41:44.578778   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:41:44.609231   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:41:44.633930   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 17:41:44.658617   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 17:41:44.684890   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:41:44.709741   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:41:44.734927   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:41:44.758813   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:41:44.782007   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:41:44.806214   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 17:41:44.823670   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:41:44.829647   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:41:44.840856   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.846133   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.846189   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:41:44.852561   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:41:44.864442   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:41:44.875936   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.880730   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.880801   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:41:44.886623   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:41:44.897721   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:41:44.909287   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.914201   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.914262   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:41:44.920052   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
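Each of the three ln -fs steps above installs a certificate under /etc/ssl/certs/<openssl-subject-hash>.0, which is the name the system trust store resolves. A hypothetical Go sketch of the same hash-and-symlink pair, shelling out to openssl rather than reimplementing the subject hash, might look like:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash mirrors the "openssl x509 -hash -noout" + "ln -fs" pair
	// in the log: the OpenSSL subject hash becomes the symlink name.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}
	
	func main() {
		for _, c := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/18368.pem",
			"/usr/share/ca-certificates/183682.pem",
		} {
			if err := linkBySubjectHash(c); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}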
	I0927 17:41:44.931726   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:41:44.936188   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:41:44.936247   33104 kubeadm.go:392] StartCluster: {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:41:44.936344   33104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 17:41:44.936410   33104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 17:41:44.979358   33104 cri.go:89] found id: ""
	I0927 17:41:44.979433   33104 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 17:41:44.989817   33104 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 17:41:45.002904   33104 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 17:41:45.014738   33104 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 17:41:45.014760   33104 kubeadm.go:157] found existing configuration files:
	
	I0927 17:41:45.014817   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 17:41:45.024092   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 17:41:45.024152   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 17:41:45.033904   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 17:41:45.043382   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 17:41:45.043439   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 17:41:45.052729   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 17:41:45.062303   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 17:41:45.062382   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 17:41:45.073359   33104 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 17:41:45.082763   33104 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 17:41:45.082834   33104 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 17:41:45.093349   33104 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 17:41:45.190478   33104 kubeadm.go:310] W0927 17:41:45.177079     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:41:45.191151   33104 kubeadm.go:310] W0927 17:41:45.178026     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 17:41:45.332459   33104 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 17:41:56.118950   33104 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 17:41:56.119025   33104 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 17:41:56.119141   33104 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 17:41:56.119282   33104 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 17:41:56.119422   33104 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 17:41:56.119505   33104 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 17:41:56.121450   33104 out.go:235]   - Generating certificates and keys ...
	I0927 17:41:56.121521   33104 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 17:41:56.121578   33104 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 17:41:56.121641   33104 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 17:41:56.121689   33104 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 17:41:56.121748   33104 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 17:41:56.121792   33104 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 17:41:56.121837   33104 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 17:41:56.121974   33104 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-748477 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0927 17:41:56.122044   33104 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 17:41:56.122168   33104 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-748477 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0927 17:41:56.122242   33104 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 17:41:56.122342   33104 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 17:41:56.122390   33104 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 17:41:56.122467   33104 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 17:41:56.122542   33104 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 17:41:56.122616   33104 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 17:41:56.122697   33104 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 17:41:56.122753   33104 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 17:41:56.122800   33104 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 17:41:56.122872   33104 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 17:41:56.122939   33104 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 17:41:56.124312   33104 out.go:235]   - Booting up control plane ...
	I0927 17:41:56.124416   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 17:41:56.124486   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 17:41:56.124538   33104 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 17:41:56.124665   33104 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 17:41:56.124745   33104 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 17:41:56.124780   33104 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 17:41:56.124883   33104 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 17:41:56.124963   33104 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 17:41:56.125009   33104 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.127696ms
	I0927 17:41:56.125069   33104 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 17:41:56.125115   33104 kubeadm.go:310] [api-check] The API server is healthy after 6.021061385s
	I0927 17:41:56.125196   33104 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 17:41:56.125298   33104 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 17:41:56.125379   33104 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 17:41:56.125578   33104 kubeadm.go:310] [mark-control-plane] Marking the node ha-748477 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 17:41:56.125630   33104 kubeadm.go:310] [bootstrap-token] Using token: hgqoqf.s456496vm8m19s9c
	I0927 17:41:56.127181   33104 out.go:235]   - Configuring RBAC rules ...
	I0927 17:41:56.127280   33104 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 17:41:56.127363   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 17:41:56.127490   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 17:41:56.127609   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 17:41:56.127704   33104 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 17:41:56.127779   33104 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 17:41:56.127880   33104 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 17:41:56.127917   33104 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 17:41:56.127954   33104 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 17:41:56.127960   33104 kubeadm.go:310] 
	I0927 17:41:56.128007   33104 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 17:41:56.128013   33104 kubeadm.go:310] 
	I0927 17:41:56.128079   33104 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 17:41:56.128085   33104 kubeadm.go:310] 
	I0927 17:41:56.128104   33104 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 17:41:56.128151   33104 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 17:41:56.128195   33104 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 17:41:56.128202   33104 kubeadm.go:310] 
	I0927 17:41:56.128243   33104 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 17:41:56.128249   33104 kubeadm.go:310] 
	I0927 17:41:56.128286   33104 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 17:41:56.128292   33104 kubeadm.go:310] 
	I0927 17:41:56.128338   33104 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 17:41:56.128406   33104 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 17:41:56.128466   33104 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 17:41:56.128474   33104 kubeadm.go:310] 
	I0927 17:41:56.128548   33104 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 17:41:56.128620   33104 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 17:41:56.128629   33104 kubeadm.go:310] 
	I0927 17:41:56.128700   33104 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hgqoqf.s456496vm8m19s9c \
	I0927 17:41:56.128804   33104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 \
	I0927 17:41:56.128840   33104 kubeadm.go:310] 	--control-plane 
	I0927 17:41:56.128853   33104 kubeadm.go:310] 
	I0927 17:41:56.128959   33104 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 17:41:56.128965   33104 kubeadm.go:310] 
	I0927 17:41:56.129032   33104 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hgqoqf.s456496vm8m19s9c \
	I0927 17:41:56.129135   33104 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 
	I0927 17:41:56.129145   33104 cni.go:84] Creating CNI manager for ""
	I0927 17:41:56.129152   33104 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0927 17:41:56.130873   33104 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0927 17:41:56.132138   33104 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0927 17:41:56.137758   33104 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 17:41:56.137776   33104 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0927 17:41:56.158395   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0927 17:41:56.545302   33104 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 17:41:56.545392   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:56.545450   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477 minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=true
	I0927 17:41:56.591362   33104 ops.go:34] apiserver oom_adj: -16
	I0927 17:41:56.760276   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:57.260604   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:57.760791   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:58.261339   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:58.760457   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.260517   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.760470   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 17:41:59.868738   33104 kubeadm.go:1113] duration metric: took 3.32341585s to wait for elevateKubeSystemPrivileges
	I0927 17:41:59.868781   33104 kubeadm.go:394] duration metric: took 14.932536309s to StartCluster
	I0927 17:41:59.868801   33104 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:59.868885   33104 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:59.869758   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:41:59.870009   33104 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:41:59.870033   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 17:41:59.870039   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:41:59.870060   33104 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 17:41:59.870153   33104 addons.go:69] Setting storage-provisioner=true in profile "ha-748477"
	I0927 17:41:59.870163   33104 addons.go:69] Setting default-storageclass=true in profile "ha-748477"
	I0927 17:41:59.870172   33104 addons.go:234] Setting addon storage-provisioner=true in "ha-748477"
	I0927 17:41:59.870182   33104 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-748477"
	I0927 17:41:59.870204   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:41:59.870252   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:41:59.870584   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.870621   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.870672   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.870714   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.886004   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44723
	I0927 17:41:59.886153   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33195
	I0927 17:41:59.886564   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.886600   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.887110   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.887133   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.887228   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.887251   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.887515   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.887575   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.887749   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.888058   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.888106   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.889954   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:41:59.890260   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 17:41:59.890780   33104 cert_rotation.go:140] Starting client certificate rotation controller
	I0927 17:41:59.891045   33104 addons.go:234] Setting addon default-storageclass=true in "ha-748477"
	I0927 17:41:59.891088   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:41:59.891458   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.891503   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.903067   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43341
	I0927 17:41:59.903643   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.904195   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.904216   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.904591   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.904788   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.906479   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:59.907260   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42313
	I0927 17:41:59.907760   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.908176   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.908198   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.908493   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.908731   33104 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 17:41:59.909071   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:41:59.909112   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:41:59.910017   33104 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:41:59.910034   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 17:41:59.910047   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:59.912776   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.913203   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:59.913230   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.913350   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:59.913531   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:59.913696   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:59.913877   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:59.924467   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44339
	I0927 17:41:59.924928   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:41:59.925397   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:41:59.925419   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:41:59.925727   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:41:59.925908   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:41:59.927570   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:41:59.927761   33104 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 17:41:59.927779   33104 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 17:41:59.927796   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:41:59.930818   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.931197   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:41:59.931223   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:41:59.931372   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:41:59.931551   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:41:59.931697   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:41:59.931825   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:41:59.972954   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
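The pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway IP 192.168.39.1 from inside the cluster. A hypothetical client-go sketch with the same effect is shown below; minikube itself performs the sed-over-ssh shown above rather than this API call, and the sketch assumes the default Corefile indentation.

	package main
	
	import (
		"context"
		"fmt"
		"strings"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// injectHostRecord adds a CoreDNS "hosts" block mapping host.minikube.internal
	// to hostIP, the same effect as the sed pipeline in the log.
	func injectHostRecord(client kubernetes.Interface, hostIP string) error {
		ctx := context.Background()
		cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		corefile := cm.Data["Corefile"]
		if strings.Contains(corefile, "host.minikube.internal") {
			return nil // already injected, keep the step idempotent
		}
		block := "    hosts {\n       " + hostIP + " host.minikube.internal\n       fallthrough\n    }\n    forward ."
		cm.Data["Corefile"] = strings.Replace(corefile, "    forward .", block, 1)
		_, err = client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := injectHostRecord(client, "192.168.39.1"); err != nil {
			panic(err)
		}
		fmt.Println("host record injected into CoreDNS ConfigMap")
	}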
	I0927 17:42:00.031245   33104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 17:42:00.108187   33104 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 17:42:00.508824   33104 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0927 17:42:00.769682   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.769710   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.769738   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.769760   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770044   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770066   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770083   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770095   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770104   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.770114   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770154   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.770162   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.770305   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770325   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770489   33104 main.go:141] libmachine: (ha-748477) DBG | Closing plugin on server side
	I0927 17:42:00.770511   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.770537   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.770589   33104 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 17:42:00.770615   33104 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 17:42:00.770734   33104 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0927 17:42:00.770749   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:00.770760   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:00.770772   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:00.784878   33104 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0927 17:42:00.785650   33104 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0927 17:42:00.785672   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:00.785684   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:00.785689   33104 round_trippers.go:473]     Content-Type: application/json
	I0927 17:42:00.785695   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:00.797693   33104 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0927 17:42:00.797883   33104 main.go:141] libmachine: Making call to close driver server
	I0927 17:42:00.797901   33104 main.go:141] libmachine: (ha-748477) Calling .Close
	I0927 17:42:00.798229   33104 main.go:141] libmachine: (ha-748477) DBG | Closing plugin on server side
	I0927 17:42:00.798283   33104 main.go:141] libmachine: Successfully made call to close driver server
	I0927 17:42:00.798298   33104 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 17:42:00.800228   33104 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 17:42:00.801634   33104 addons.go:510] duration metric: took 931.586908ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 17:42:00.801675   33104 start.go:246] waiting for cluster config update ...
	I0927 17:42:00.801692   33104 start.go:255] writing updated cluster config ...
	I0927 17:42:00.803627   33104 out.go:201] 
	I0927 17:42:00.805265   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:00.805361   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:00.807406   33104 out.go:177] * Starting "ha-748477-m02" control-plane node in "ha-748477" cluster
	I0927 17:42:00.809474   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:42:00.809516   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:42:00.809668   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:42:00.809688   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:42:00.809795   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:00.810056   33104 start.go:360] acquireMachinesLock for ha-748477-m02: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:42:00.810115   33104 start.go:364] duration metric: took 34.075µs to acquireMachinesLock for "ha-748477-m02"
	I0927 17:42:00.810139   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:00.810241   33104 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0927 17:42:00.812114   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:42:00.812247   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:00.812304   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:00.827300   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44103
	I0927 17:42:00.827815   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:00.828325   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:00.828351   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:00.828634   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:00.828813   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:00.828931   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:00.829052   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:42:00.829102   33104 client.go:168] LocalClient.Create starting
	I0927 17:42:00.829156   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:42:00.829194   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:42:00.829211   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:42:00.829254   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:42:00.829271   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:42:00.829282   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:42:00.829297   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:42:00.829305   33104 main.go:141] libmachine: (ha-748477-m02) Calling .PreCreateCheck
	I0927 17:42:00.829460   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:00.829822   33104 main.go:141] libmachine: Creating machine...
	I0927 17:42:00.829839   33104 main.go:141] libmachine: (ha-748477-m02) Calling .Create
	I0927 17:42:00.829995   33104 main.go:141] libmachine: (ha-748477-m02) Creating KVM machine...
	I0927 17:42:00.831397   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found existing default KVM network
	I0927 17:42:00.831514   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found existing private KVM network mk-ha-748477
	I0927 17:42:00.831650   33104 main.go:141] libmachine: (ha-748477-m02) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 ...
	I0927 17:42:00.831667   33104 main.go:141] libmachine: (ha-748477-m02) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:42:00.831765   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:00.831653   33474 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:42:00.831855   33104 main.go:141] libmachine: (ha-748477-m02) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:42:01.074875   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.074746   33474 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa...
	I0927 17:42:01.284394   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.284285   33474 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk...
	I0927 17:42:01.284431   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Writing magic tar header
	I0927 17:42:01.284445   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Writing SSH key tar header
	I0927 17:42:01.285094   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:01.284993   33474 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 ...
	I0927 17:42:01.285131   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02
	I0927 17:42:01.285145   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02 (perms=drwx------)
	I0927 17:42:01.285162   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:42:01.285184   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:42:01.285194   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:42:01.285208   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:42:01.285223   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:42:01.285233   33104 main.go:141] libmachine: (ha-748477-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:42:01.285245   33104 main.go:141] libmachine: (ha-748477-m02) Creating domain...
	I0927 17:42:01.285258   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:42:01.285272   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:42:01.285288   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:42:01.285298   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:42:01.285311   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Checking permissions on dir: /home
	I0927 17:42:01.285320   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Skipping /home - not owner
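
Up to this point the kvm2 driver has created the machine's SSH key, written a raw disk image, and tightened permissions on the store path. A rough, stdlib-only Go sketch of those three steps (not minikube's actual code: the directory, key size, and 20000 MB disk size below are assumptions taken from the config above):

```go
// Hypothetical sketch of the "create ssh key + raw disk + fix permissions" step.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	machineDir := "/tmp/demo-machine" // assumption: stand-in for .minikube/machines/<name>
	if err := os.MkdirAll(machineDir, 0o700); err != nil {
		panic(err)
	}

	// 1. Create an RSA SSH private key, PEM-encoded, mode 0600 (like id_rsa in the log).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(filepath.Join(machineDir, "id_rsa"), pemBytes, 0o600); err != nil {
		panic(err)
	}

	// 2. Create a raw disk image (the .rawdisk file); 20000 MB as in the node config.
	disk, err := os.Create(filepath.Join(machineDir, "demo.rawdisk"))
	if err != nil {
		panic(err)
	}
	defer disk.Close()
	if err := disk.Truncate(20000 * 1024 * 1024); err != nil {
		panic(err)
	}

	// 3. The store path itself gets restrictive permissions, mirroring the
	// "Setting executable bit" lines above.
	fmt.Println("created", machineDir)
}
```

Creating the disk with Truncate keeps the file sparse, which is why a ~20 GB .rawdisk can appear essentially instantly in the log.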
	I0927 17:42:01.286214   33104 main.go:141] libmachine: (ha-748477-m02) define libvirt domain using xml: 
	I0927 17:42:01.286236   33104 main.go:141] libmachine: (ha-748477-m02) <domain type='kvm'>
	I0927 17:42:01.286246   33104 main.go:141] libmachine: (ha-748477-m02)   <name>ha-748477-m02</name>
	I0927 17:42:01.286259   33104 main.go:141] libmachine: (ha-748477-m02)   <memory unit='MiB'>2200</memory>
	I0927 17:42:01.286286   33104 main.go:141] libmachine: (ha-748477-m02)   <vcpu>2</vcpu>
	I0927 17:42:01.286306   33104 main.go:141] libmachine: (ha-748477-m02)   <features>
	I0927 17:42:01.286319   33104 main.go:141] libmachine: (ha-748477-m02)     <acpi/>
	I0927 17:42:01.286326   33104 main.go:141] libmachine: (ha-748477-m02)     <apic/>
	I0927 17:42:01.286334   33104 main.go:141] libmachine: (ha-748477-m02)     <pae/>
	I0927 17:42:01.286340   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286348   33104 main.go:141] libmachine: (ha-748477-m02)   </features>
	I0927 17:42:01.286353   33104 main.go:141] libmachine: (ha-748477-m02)   <cpu mode='host-passthrough'>
	I0927 17:42:01.286361   33104 main.go:141] libmachine: (ha-748477-m02)   
	I0927 17:42:01.286365   33104 main.go:141] libmachine: (ha-748477-m02)   </cpu>
	I0927 17:42:01.286372   33104 main.go:141] libmachine: (ha-748477-m02)   <os>
	I0927 17:42:01.286377   33104 main.go:141] libmachine: (ha-748477-m02)     <type>hvm</type>
	I0927 17:42:01.286386   33104 main.go:141] libmachine: (ha-748477-m02)     <boot dev='cdrom'/>
	I0927 17:42:01.286396   33104 main.go:141] libmachine: (ha-748477-m02)     <boot dev='hd'/>
	I0927 17:42:01.286408   33104 main.go:141] libmachine: (ha-748477-m02)     <bootmenu enable='no'/>
	I0927 17:42:01.286417   33104 main.go:141] libmachine: (ha-748477-m02)   </os>
	I0927 17:42:01.286442   33104 main.go:141] libmachine: (ha-748477-m02)   <devices>
	I0927 17:42:01.286465   33104 main.go:141] libmachine: (ha-748477-m02)     <disk type='file' device='cdrom'>
	I0927 17:42:01.286483   33104 main.go:141] libmachine: (ha-748477-m02)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/boot2docker.iso'/>
	I0927 17:42:01.286494   33104 main.go:141] libmachine: (ha-748477-m02)       <target dev='hdc' bus='scsi'/>
	I0927 17:42:01.286503   33104 main.go:141] libmachine: (ha-748477-m02)       <readonly/>
	I0927 17:42:01.286512   33104 main.go:141] libmachine: (ha-748477-m02)     </disk>
	I0927 17:42:01.286521   33104 main.go:141] libmachine: (ha-748477-m02)     <disk type='file' device='disk'>
	I0927 17:42:01.286532   33104 main.go:141] libmachine: (ha-748477-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:42:01.286553   33104 main.go:141] libmachine: (ha-748477-m02)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/ha-748477-m02.rawdisk'/>
	I0927 17:42:01.286577   33104 main.go:141] libmachine: (ha-748477-m02)       <target dev='hda' bus='virtio'/>
	I0927 17:42:01.286589   33104 main.go:141] libmachine: (ha-748477-m02)     </disk>
	I0927 17:42:01.286596   33104 main.go:141] libmachine: (ha-748477-m02)     <interface type='network'>
	I0927 17:42:01.286606   33104 main.go:141] libmachine: (ha-748477-m02)       <source network='mk-ha-748477'/>
	I0927 17:42:01.286615   33104 main.go:141] libmachine: (ha-748477-m02)       <model type='virtio'/>
	I0927 17:42:01.286623   33104 main.go:141] libmachine: (ha-748477-m02)     </interface>
	I0927 17:42:01.286631   33104 main.go:141] libmachine: (ha-748477-m02)     <interface type='network'>
	I0927 17:42:01.286637   33104 main.go:141] libmachine: (ha-748477-m02)       <source network='default'/>
	I0927 17:42:01.286669   33104 main.go:141] libmachine: (ha-748477-m02)       <model type='virtio'/>
	I0927 17:42:01.286682   33104 main.go:141] libmachine: (ha-748477-m02)     </interface>
	I0927 17:42:01.286689   33104 main.go:141] libmachine: (ha-748477-m02)     <serial type='pty'>
	I0927 17:42:01.286700   33104 main.go:141] libmachine: (ha-748477-m02)       <target port='0'/>
	I0927 17:42:01.286710   33104 main.go:141] libmachine: (ha-748477-m02)     </serial>
	I0927 17:42:01.286718   33104 main.go:141] libmachine: (ha-748477-m02)     <console type='pty'>
	I0927 17:42:01.286745   33104 main.go:141] libmachine: (ha-748477-m02)       <target type='serial' port='0'/>
	I0927 17:42:01.286757   33104 main.go:141] libmachine: (ha-748477-m02)     </console>
	I0927 17:42:01.286769   33104 main.go:141] libmachine: (ha-748477-m02)     <rng model='virtio'>
	I0927 17:42:01.286780   33104 main.go:141] libmachine: (ha-748477-m02)       <backend model='random'>/dev/random</backend>
	I0927 17:42:01.286789   33104 main.go:141] libmachine: (ha-748477-m02)     </rng>
	I0927 17:42:01.286798   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286805   33104 main.go:141] libmachine: (ha-748477-m02)     
	I0927 17:42:01.286814   33104 main.go:141] libmachine: (ha-748477-m02)   </devices>
	I0927 17:42:01.286821   33104 main.go:141] libmachine: (ha-748477-m02) </domain>
	I0927 17:42:01.286829   33104 main.go:141] libmachine: (ha-748477-m02) 
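
The XML streamed above is the libvirt domain definition for the new node: 2 vCPUs, 2200 MiB of RAM, a boot ISO on a cdrom device, the raw disk, and two virtio NICs (one on the private mk-ha-748477 network, one on default). As a hedged sketch, such a definition can be rendered and registered with libvirt using only the standard library plus the virsh CLI; the domain name, paths, and the trimmed-down template below are illustrative, not the driver's exact XML:

```go
// Illustrative only: renders a stripped-down libvirt domain definition similar to the
// XML logged above and feeds it to `virsh define`. Assumes virsh is installed and that
// qemu:///system is reachable.
package main

import (
	"os"
	"os/exec"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
    <serial type='pty'><target port='0'/></serial>
  </devices>
</domain>
`

type domain struct {
	Name, DiskPath, Network string
	MemoryMiB, CPUs         int
}

func main() {
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())

	d := domain{Name: "demo-m02", DiskPath: "/tmp/demo.rawdisk", Network: "default", MemoryMiB: 2200, CPUs: 2}
	if err := template.Must(template.New("dom").Parse(domainTmpl)).Execute(f, d); err != nil {
		panic(err)
	}
	f.Close()

	// Register the domain with libvirt; a later `virsh start demo-m02` boots it.
	out, err := exec.Command("virsh", "-c", "qemu:///system", "define", f.Name()).CombinedOutput()
	if err != nil {
		panic(string(out))
	}
	os.Stdout.Write(out)
}
```

`virsh define` only registers the domain; starting it is a separate step, which roughly corresponds to the "Creating domain..." line that follows in the log.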
	I0927 17:42:01.295323   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:dc:55:b0 in network default
	I0927 17:42:01.296033   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring networks are active...
	I0927 17:42:01.296060   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:01.297259   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring network default is active
	I0927 17:42:01.297652   33104 main.go:141] libmachine: (ha-748477-m02) Ensuring network mk-ha-748477 is active
	I0927 17:42:01.298102   33104 main.go:141] libmachine: (ha-748477-m02) Getting domain xml...
	I0927 17:42:01.298966   33104 main.go:141] libmachine: (ha-748477-m02) Creating domain...
	I0927 17:42:02.564561   33104 main.go:141] libmachine: (ha-748477-m02) Waiting to get IP...
	I0927 17:42:02.565309   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:02.565769   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:02.565802   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:02.565771   33474 retry.go:31] will retry after 303.737915ms: waiting for machine to come up
	I0927 17:42:02.871429   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:02.871830   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:02.871854   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:02.871802   33474 retry.go:31] will retry after 330.658569ms: waiting for machine to come up
	I0927 17:42:03.204264   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:03.204715   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:03.204739   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:03.204669   33474 retry.go:31] will retry after 480.920904ms: waiting for machine to come up
	I0927 17:42:03.687319   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:03.687901   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:03.687922   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:03.687827   33474 retry.go:31] will retry after 531.287792ms: waiting for machine to come up
	I0927 17:42:04.220560   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:04.221117   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:04.221147   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:04.221064   33474 retry.go:31] will retry after 645.559246ms: waiting for machine to come up
	I0927 17:42:04.867651   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:04.868069   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:04.868092   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:04.868034   33474 retry.go:31] will retry after 621.251066ms: waiting for machine to come up
	I0927 17:42:05.491583   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:05.492060   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:05.492081   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:05.492018   33474 retry.go:31] will retry after 1.144789742s: waiting for machine to come up
	I0927 17:42:06.638697   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:06.639055   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:06.639079   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:06.639012   33474 retry.go:31] will retry after 1.297542087s: waiting for machine to come up
	I0927 17:42:07.937857   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:07.938263   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:07.938304   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:07.938221   33474 retry.go:31] will retry after 1.728772395s: waiting for machine to come up
	I0927 17:42:09.668990   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:09.669424   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:09.669449   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:09.669386   33474 retry.go:31] will retry after 1.816616404s: waiting for machine to come up
	I0927 17:42:11.487206   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:11.487803   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:11.487830   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:11.487752   33474 retry.go:31] will retry after 2.262897527s: waiting for machine to come up
	I0927 17:42:13.751754   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:13.752138   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:13.752156   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:13.752109   33474 retry.go:31] will retry after 2.651419719s: waiting for machine to come up
	I0927 17:42:16.404625   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:16.405063   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:16.405087   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:16.405019   33474 retry.go:31] will retry after 2.90839218s: waiting for machine to come up
	I0927 17:42:19.317108   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:19.317506   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find current IP address of domain ha-748477-m02 in network mk-ha-748477
	I0927 17:42:19.317528   33104 main.go:141] libmachine: (ha-748477-m02) DBG | I0927 17:42:19.317483   33474 retry.go:31] will retry after 5.075657253s: waiting for machine to come up
	I0927 17:42:24.396494   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.396873   33104 main.go:141] libmachine: (ha-748477-m02) Found IP for machine: 192.168.39.58
	I0927 17:42:24.396891   33104 main.go:141] libmachine: (ha-748477-m02) Reserving static IP address...
	I0927 17:42:24.396899   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has current primary IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.397346   33104 main.go:141] libmachine: (ha-748477-m02) DBG | unable to find host DHCP lease matching {name: "ha-748477-m02", mac: "52:54:00:70:40:9e", ip: "192.168.39.58"} in network mk-ha-748477
	I0927 17:42:24.472936   33104 main.go:141] libmachine: (ha-748477-m02) Reserved static IP address: 192.168.39.58
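
The block of "will retry after ..." lines is a backoff loop: the driver repeatedly asks libvirt for the guest's DHCP lease until an IPv4 address appears (here 192.168.39.58 after roughly 24 seconds). A comparable poll can be written against `virsh domifaddr`; the domain name, timeout, and backoff growth below are assumptions:

```go
// Sketch of the wait-for-IP loop above: poll libvirt for the guest's address with a
// growing delay until a deadline. Requires a running domain and the virsh CLI.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		out, _ := exec.Command("virsh", "-c", "qemu:///system", "domifaddr", domain).Output()
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "ipv4") {
				fields := strings.Fields(line)
				// Last column is "ADDR/PREFIX", e.g. 192.168.39.58/24.
				return strings.SplitN(fields[len(fields)-1], "/", 2)[0], nil
			}
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff, roughly like the retry intervals in the log
	}
	return "", fmt.Errorf("no IP for %s after %s", domain, timeout)
}

func main() {
	ip, err := waitForIP("demo-m02", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("found IP:", ip)
}
```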
	I0927 17:42:24.472971   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Getting to WaitForSSH function...
	I0927 17:42:24.472980   33104 main.go:141] libmachine: (ha-748477-m02) Waiting for SSH to be available...
	I0927 17:42:24.475305   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.475680   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.475707   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.475845   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using SSH client type: external
	I0927 17:42:24.475874   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa (-rw-------)
	I0927 17:42:24.475906   33104 main.go:141] libmachine: (ha-748477-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:42:24.475929   33104 main.go:141] libmachine: (ha-748477-m02) DBG | About to run SSH command:
	I0927 17:42:24.475966   33104 main.go:141] libmachine: (ha-748477-m02) DBG | exit 0
	I0927 17:42:24.606575   33104 main.go:141] libmachine: (ha-748477-m02) DBG | SSH cmd err, output: <nil>: 
	I0927 17:42:24.606899   33104 main.go:141] libmachine: (ha-748477-m02) KVM machine creation complete!
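
With an address reserved, the driver probes the guest over SSH by running `exit 0` with a throwaway known-hosts file and key-only authentication (the external-client flags are listed above). A minimal stand-in that shells out to the system ssh with similar options; the user, host, and key path are placeholders:

```go
// Probe SSH reachability the way the log does: run `exit 0` until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "IdentitiesOnly=yes",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "ConnectTimeout=10",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if sshReady("docker", "192.168.39.58", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```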
	I0927 17:42:24.607222   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:24.607761   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:24.607936   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:24.608087   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:42:24.608100   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetState
	I0927 17:42:24.609395   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:42:24.609407   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:42:24.609412   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:42:24.609417   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.611533   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.611868   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.611888   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.612022   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.612209   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.612399   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.612547   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.612697   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.612879   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.612890   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:42:24.725891   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:42:24.725919   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:42:24.725930   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.728630   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.728976   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.729006   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.729191   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.729340   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.729487   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.729609   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.729734   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.730028   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.730047   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:42:24.843111   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:42:24.843154   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:42:24.843160   33104 main.go:141] libmachine: Provisioning with buildroot...
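
Provisioner detection is simply `cat /etc/os-release`: the ID=buildroot line tells the provisioner which host flavour it is configuring. A small parser for that key=value format, as a sketch:

```go
// Parse /etc/os-release into a map, as an illustration of the detection step above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Println("ID:", info["ID"], "PRETTY_NAME:", info["PRETTY_NAME"])
}
```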
	I0927 17:42:24.843168   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:24.843396   33104 buildroot.go:166] provisioning hostname "ha-748477-m02"
	I0927 17:42:24.843419   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:24.843631   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.846504   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.847013   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.847039   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.847168   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.847341   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.847483   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.847608   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.847738   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.847896   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.847908   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477-m02 && echo "ha-748477-m02" | sudo tee /etc/hostname
	I0927 17:42:24.977249   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477-m02
	
	I0927 17:42:24.977281   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:24.980072   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.980385   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:24.980420   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:24.980605   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:24.980758   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.980898   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:24.980996   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:24.981123   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:24.981324   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:24.981348   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:42:25.103047   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:42:25.103077   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:42:25.103095   33104 buildroot.go:174] setting up certificates
	I0927 17:42:25.103105   33104 provision.go:84] configureAuth start
	I0927 17:42:25.103113   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetMachineName
	I0927 17:42:25.103329   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.105948   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.106264   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.106287   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.106466   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.109004   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.109390   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.109418   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.109562   33104 provision.go:143] copyHostCerts
	I0927 17:42:25.109608   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:42:25.109641   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:42:25.109649   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:42:25.109714   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:42:25.109782   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:42:25.109802   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:42:25.109808   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:42:25.109832   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:42:25.109873   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:42:25.109891   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:42:25.109897   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:42:25.109916   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:42:25.109964   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477-m02 san=[127.0.0.1 192.168.39.58 ha-748477-m02 localhost minikube]
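
configureAuth generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost, and minikube, signed against the CA under .minikube/certs. The sketch below builds a comparable certificate with crypto/x509 but self-signs it for brevity; the org, SAN list, and 26280h lifetime are taken from the log, everything else is illustrative:

```go
// Generate a server certificate with SAN entries similar to the ones logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-748477-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-748477-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
	}
	// Self-signed here (template doubles as parent); the real flow signs with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```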
	I0927 17:42:25.258618   33104 provision.go:177] copyRemoteCerts
	I0927 17:42:25.258690   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:42:25.258710   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.261212   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.261548   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.261586   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.261707   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.261895   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.262022   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.262183   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.348808   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:42:25.348876   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:42:25.372365   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:42:25.372460   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:42:25.397105   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:42:25.397179   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:42:25.422506   33104 provision.go:87] duration metric: took 319.390123ms to configureAuth
	I0927 17:42:25.422532   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:42:25.422731   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:25.422799   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.425981   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.426408   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.426451   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.426606   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.426811   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.426969   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.427088   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.427226   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:25.427394   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:25.427408   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:42:25.661521   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:42:25.661549   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:42:25.661558   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetURL
	I0927 17:42:25.662897   33104 main.go:141] libmachine: (ha-748477-m02) DBG | Using libvirt version 6000000
	I0927 17:42:25.665077   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.665379   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.665406   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.665564   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:42:25.665578   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:42:25.665585   33104 client.go:171] duration metric: took 24.836463256s to LocalClient.Create
	I0927 17:42:25.665605   33104 start.go:167] duration metric: took 24.836555157s to libmachine.API.Create "ha-748477"
	I0927 17:42:25.665614   33104 start.go:293] postStartSetup for "ha-748477-m02" (driver="kvm2")
	I0927 17:42:25.665623   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:42:25.665638   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.665877   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:42:25.665912   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.668048   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.668346   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.668368   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.668516   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.668698   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.668825   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.668921   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.756903   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:42:25.761205   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:42:25.761239   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:42:25.761301   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:42:25.761393   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:42:25.761406   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:42:25.761506   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:42:25.771507   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:42:25.794679   33104 start.go:296] duration metric: took 129.051968ms for postStartSetup
	I0927 17:42:25.794731   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetConfigRaw
	I0927 17:42:25.795430   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.797924   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.798413   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.798536   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.798704   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:42:25.798927   33104 start.go:128] duration metric: took 24.988675406s to createHost
	I0927 17:42:25.798952   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.801621   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.801988   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.802014   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.802223   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.802493   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.802671   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.802846   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.803001   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:42:25.803176   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I0927 17:42:25.803187   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:42:25.919256   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727458945.878335898
	
	I0927 17:42:25.919284   33104 fix.go:216] guest clock: 1727458945.878335898
	I0927 17:42:25.919291   33104 fix.go:229] Guest: 2024-09-27 17:42:25.878335898 +0000 UTC Remote: 2024-09-27 17:42:25.79893912 +0000 UTC m=+74.552336236 (delta=79.396778ms)
	I0927 17:42:25.919305   33104 fix.go:200] guest clock delta is within tolerance: 79.396778ms
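
The clock check reads `date +%s.%N` on the guest and compares it with the host's wall clock; here the ~79.4 ms delta is accepted. A sketch of that comparison (the 2 s tolerance below is an assumption, not necessarily minikube's actual threshold):

```go
// Compare a guest `date +%s.%N` reading against the host clock and flag drift.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestSeconds string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestSeconds, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	// Example values taken from the log output above.
	delta, err := clockDelta("1727458945.878335898", time.Unix(1727458945, 798939120))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; time sync needed\n", delta, tolerance)
	}
}
```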
	I0927 17:42:25.919309   33104 start.go:83] releasing machines lock for "ha-748477-m02", held for 25.109183327s
	I0927 17:42:25.919328   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.919584   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:25.923127   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.923545   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.923567   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.925887   33104 out.go:177] * Found network options:
	I0927 17:42:25.927311   33104 out.go:177]   - NO_PROXY=192.168.39.217
	W0927 17:42:25.928478   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:42:25.928534   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929113   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929289   33104 main.go:141] libmachine: (ha-748477-m02) Calling .DriverName
	I0927 17:42:25.929384   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:42:25.929413   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	W0927 17:42:25.929520   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:42:25.929601   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:42:25.929627   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHHostname
	I0927 17:42:25.932151   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932175   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932560   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.932590   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932615   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:25.932630   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:25.932752   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.932954   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.932961   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHPort
	I0927 17:42:25.933111   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHKeyPath
	I0927 17:42:25.933120   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.933235   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetSSHUsername
	I0927 17:42:25.933296   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:25.933372   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m02/id_rsa Username:docker}
	I0927 17:42:26.183554   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:42:26.189225   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:42:26.189283   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:42:26.205357   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:42:26.205380   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:42:26.205446   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:42:26.220556   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:42:26.233593   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:42:26.233652   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:42:26.247225   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:42:26.260534   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:42:26.378535   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:42:26.534217   33104 docker.go:233] disabling docker service ...
	I0927 17:42:26.534299   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:42:26.549457   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:42:26.564190   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:42:26.685257   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:42:26.798705   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:42:26.812177   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:42:26.830049   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:42:26.830103   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.840055   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:42:26.840116   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.850116   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.860785   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.870699   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:42:26.880704   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.890585   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:42:26.908416   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
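
The tee and sed commands above converge on two small files: /etc/crictl.yaml, which points crictl at the CRI-O socket, and a CRI-O drop-in (02-crio.conf) that pins the pause image, switches the cgroup driver to cgroupfs, and allows unprivileged low ports. The sketch below writes equivalent contents to a scratch directory; the TOML section names and exact layout are a distillation of the log, not a copy of the real file:

```go
// Write illustrative crictl and CRI-O drop-in configs to a scratch directory.
// On a real node these live in /etc and require root.
package main

import "os"

const crictlYAML = `runtime-endpoint: unix:///var/run/crio/crio.sock
`

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	if err := os.MkdirAll("/tmp/crio-demo", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/tmp/crio-demo/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/tmp/crio-demo/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}
```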
	I0927 17:42:26.918721   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:42:26.928323   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:42:26.928384   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:42:26.941204   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:42:26.951302   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:27.079256   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:42:27.173071   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:42:27.173154   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:42:27.178109   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:42:27.178161   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:42:27.181733   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:42:27.220015   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:42:27.220101   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:42:27.248905   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:42:27.278391   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:42:27.279800   33104 out.go:177]   - env NO_PROXY=192.168.39.217
	I0927 17:42:27.281146   33104 main.go:141] libmachine: (ha-748477-m02) Calling .GetIP
	I0927 17:42:27.283736   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:27.284063   33104 main.go:141] libmachine: (ha-748477-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:40:9e", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:42:14 +0000 UTC Type:0 Mac:52:54:00:70:40:9e Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-748477-m02 Clientid:01:52:54:00:70:40:9e}
	I0927 17:42:27.284089   33104 main.go:141] libmachine: (ha-748477-m02) DBG | domain ha-748477-m02 has defined IP address 192.168.39.58 and MAC address 52:54:00:70:40:9e in network mk-ha-748477
	I0927 17:42:27.284314   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:42:27.288290   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:42:27.300052   33104 mustload.go:65] Loading cluster: ha-748477
	I0927 17:42:27.300240   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:27.300504   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:27.300539   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:27.315110   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I0927 17:42:27.315566   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:27.316043   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:27.316066   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:27.316373   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:27.316560   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:42:27.317977   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:42:27.318257   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:27.318292   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:27.332715   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41483
	I0927 17:42:27.333159   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:27.333632   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:27.333651   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:27.333971   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:27.334145   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:27.334286   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.58
	I0927 17:42:27.334297   33104 certs.go:194] generating shared ca certs ...
	I0927 17:42:27.334310   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.334448   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:42:27.334484   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:42:27.334493   33104 certs.go:256] generating profile certs ...
	I0927 17:42:27.334557   33104 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:42:27.334581   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3
	I0927 17:42:27.334596   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.254]
	I0927 17:42:27.465658   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 ...
	I0927 17:42:27.465688   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3: {Name:mkaab33c389419b06a9d77e9186d99602df50635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.465878   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3 ...
	I0927 17:42:27.465895   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3: {Name:mkd8c2f05dd9abfddfcaec4316f440a902331ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:42:27.465985   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.4e710fd3 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:42:27.466113   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.4e710fd3 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:42:27.466230   33104 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
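
(The apiserver profile certificate generated above carries the cluster service IP, localhost, both node IPs and the HA VIP as IP subject alternative names. The self-signed Go sketch below only shows how those SANs from the log end up in a certificate template; the real flow signs the cert with the shared minikubeCA rather than self-signing.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs recorded in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.217"), net.ParseIP("192.168.39.58"), net.ParseIP("192.168.39.254"),
            },
        }
        // Self-signed here for brevity; minikube signs with minikubeCA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
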
	I0927 17:42:27.466244   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:42:27.466256   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:42:27.466270   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:42:27.466282   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:42:27.466294   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:42:27.466308   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:42:27.466321   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:42:27.466333   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:42:27.466389   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:42:27.466416   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:42:27.466425   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:42:27.466444   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:42:27.466466   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:42:27.466487   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:42:27.466523   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:42:27.466547   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:42:27.466560   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:42:27.466572   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:27.466601   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:42:27.469497   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:27.469863   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:42:27.469893   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:27.470027   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:42:27.470244   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:42:27.470394   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:42:27.470523   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:42:27.543106   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 17:42:27.548154   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 17:42:27.558735   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 17:42:27.563158   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 17:42:27.573602   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 17:42:27.578182   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 17:42:27.588485   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 17:42:27.592478   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0927 17:42:27.603608   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 17:42:27.607668   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 17:42:27.620252   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 17:42:27.624885   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 17:42:27.644493   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:42:27.668339   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:42:27.691150   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:42:27.715241   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:42:27.738617   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 17:42:27.761798   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 17:42:27.784499   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:42:27.807853   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:42:27.830972   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:42:27.853871   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:42:27.876810   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:42:27.900824   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 17:42:27.917097   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 17:42:27.933218   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 17:42:27.951040   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0927 17:42:27.967600   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 17:42:27.984161   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 17:42:28.000351   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 17:42:28.016844   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:42:28.022390   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:42:28.032675   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.037756   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.037825   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:42:28.043874   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:42:28.054764   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:42:28.065690   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.070320   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.070397   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:42:28.075845   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:42:28.086186   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:42:28.096788   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.101134   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.101189   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:42:28.106935   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
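
(The test -L / ln -fs runs above give each CA file an OpenSSL subject-hash symlink such as /etc/ssl/certs/b5213941.0, which is how OpenSSL-based clients find trusted CAs. A small Go sketch of that convention follows, assuming it runs as root on the node; the path is illustrative.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink creates the /etc/ssl/certs/<subject-hash>.0 symlink that the
    // log's "ln -fs" commands set up for a CA certificate.
    func hashLink(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        _ = os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
    }
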
	I0927 17:42:28.117866   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:42:28.122166   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:42:28.122230   33104 kubeadm.go:934] updating node {m02 192.168.39.58 8443 v1.31.1 crio true true} ...
	I0927 17:42:28.122310   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:42:28.122340   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:42:28.122374   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:42:28.138780   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:42:28.138839   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
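
(The static pod manifest above is dropped into /etc/kubernetes/manifests so kube-vip advertises the HA VIP 192.168.39.254 via ARP and, with lb_enable/lb_port set, load-balances API traffic on 8443 across control-plane nodes. The Go sketch below, assuming the gopkg.in/yaml.v3 module and a local copy of the manifest, just pulls out those env values; the struct is illustrative, not minikube code.)

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Only the fields needed to inspect the kube-vip env block are modelled.
    type manifest struct {
        Spec struct {
            Containers []struct {
                Name string `yaml:"name"`
                Env  []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        raw, err := os.ReadFile("kube-vip.yaml") // a local copy of the manifest above
        if err != nil {
            panic(err)
        }
        var m manifest
        if err := yaml.Unmarshal(raw, &m); err != nil {
            panic(err)
        }
        for _, c := range m.Spec.Containers {
            for _, e := range c.Env {
                switch e.Name {
                case "address", "cp_enable", "lb_enable", "lb_port":
                    fmt.Printf("%s=%s\n", e.Name, e.Value)
                }
            }
        }
    }
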
	I0927 17:42:28.138889   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:42:28.148160   33104 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 17:42:28.148222   33104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 17:42:28.157728   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 17:42:28.157755   33104 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0927 17:42:28.157763   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:42:28.157776   33104 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0927 17:42:28.157830   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:42:28.161980   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 17:42:28.162007   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 17:42:29.300439   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:42:29.320131   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:42:29.320267   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:42:29.326589   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 17:42:29.326624   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 17:42:29.546925   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:42:29.547011   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:42:29.561849   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 17:42:29.561885   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
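
(Each kubectl/kubelet/kubeadm transfer above follows the same pattern: stat the target under /var/lib/minikube/binaries/v1.31.1 and only scp the cached binary when stat exits non-zero. A minimal Go sketch of that check-then-copy step, assuming plain ssh/scp with the node key, is shown below; host and paths are illustrative.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureRemoteBinary copies a cached binary to the node only if it is
    // missing there, mirroring the stat-then-scp sequence in the log.
    func ensureRemoteBinary(host, key, local, remote string) error {
        if exec.Command("ssh", "-i", key, host, "stat "+remote).Run() == nil {
            return nil // already present: skip the transfer
        }
        out, err := exec.Command("scp", "-i", key, local, host+":"+remote).CombinedOutput()
        if err != nil {
            return fmt.Errorf("scp %s: %v: %s", local, err, out)
        }
        return nil
    }

    func main() {
        err := ensureRemoteBinary("docker@192.168.39.58", "id_rsa",
            ".minikube/cache/linux/amd64/v1.31.1/kubeadm",
            "/var/lib/minikube/binaries/v1.31.1/kubeadm")
        fmt.Println(err)
    }
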
	I0927 17:42:29.913564   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 17:42:29.925322   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0927 17:42:29.944272   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:42:29.964365   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:42:29.984051   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:42:29.988161   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:42:30.002830   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:30.137318   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:42:30.153192   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:42:30.153643   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:42:30.153695   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:42:30.169225   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0927 17:42:30.169762   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:42:30.170299   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:42:30.170317   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:42:30.170628   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:42:30.170823   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:42:30.170945   33104 start.go:317] joinCluster: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:42:30.171062   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 17:42:30.171085   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:42:30.174028   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:30.174526   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:42:30.174587   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:42:30.174767   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:42:30.174933   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:42:30.175042   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:42:30.175135   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:42:30.312283   33104 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:30.312328   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 65pjfr.i6bbe1dq2ien9ht7 --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m02 --control-plane --apiserver-advertise-address=192.168.39.58 --apiserver-bind-port=8443"
	I0927 17:42:51.845707   33104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 65pjfr.i6bbe1dq2ien9ht7 --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m02 --control-plane --apiserver-advertise-address=192.168.39.58 --apiserver-bind-port=8443": (21.533351476s)
	I0927 17:42:51.845746   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 17:42:52.382325   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477-m02 minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=false
	I0927 17:42:52.503362   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-748477-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 17:42:52.636002   33104 start.go:319] duration metric: took 22.465049006s to joinCluster
	I0927 17:42:52.636077   33104 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:42:52.636363   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:42:52.637939   33104 out.go:177] * Verifying Kubernetes components...
	I0927 17:42:52.639336   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:42:52.942345   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:42:52.995016   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:42:52.995348   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 17:42:52.995436   33104 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0927 17:42:52.995698   33104 node_ready.go:35] waiting up to 6m0s for node "ha-748477-m02" to be "Ready" ...
	I0927 17:42:52.995829   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:52.995840   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:52.995852   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:52.995860   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:53.010565   33104 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0927 17:42:53.496570   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:53.496600   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:53.496611   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:53.496618   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:53.501635   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:53.996537   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:53.996562   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:53.996573   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:53.996580   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.000293   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:54.496339   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:54.496367   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:54.496379   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:54.496386   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.500335   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:54.996231   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:54.996259   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:54.996267   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:54.996270   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:54.999765   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:55.000291   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:55.496156   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:55.496179   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:55.496190   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:55.496194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:55.499869   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:55.995928   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:55.995956   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:55.995967   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:55.995976   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:56.000264   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:56.496233   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:56.496262   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:56.496274   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:56.496280   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:56.508959   33104 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0927 17:42:56.996002   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:56.996027   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:56.996035   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:56.996039   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:57.000487   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:57.001143   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:57.496517   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:57.496539   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:57.496547   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:57.496551   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:57.500687   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:42:57.996942   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:57.996968   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:57.996980   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:57.996985   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:58.007878   33104 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0927 17:42:58.495950   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:58.495978   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:58.495986   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:58.495992   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:58.502154   33104 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0927 17:42:58.995965   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:58.995987   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:58.995994   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:58.995999   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:59.120906   33104 round_trippers.go:574] Response Status: 200 OK in 124 milliseconds
	I0927 17:42:59.121564   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:42:59.496878   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:59.496899   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:59.496907   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:42:59.496913   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:59.500334   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:42:59.996861   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:42:59.996891   33104 round_trippers.go:469] Request Headers:
	I0927 17:42:59.996904   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:42:59.996909   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:00.000651   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:00.496984   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:00.497010   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:00.497020   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:00.497025   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:00.501929   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:00.996193   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:00.996216   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:00.996224   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:00.996228   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:01.000081   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:01.496245   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:01.496271   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:01.496280   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:01.496289   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:01.500327   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:01.500876   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:01.996256   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:01.996293   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:01.996319   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:01.996323   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:02.000731   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:02.496770   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:02.496794   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:02.496807   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:02.496811   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:02.499906   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:02.996753   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:02.996778   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:02.996788   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:02.996794   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.000162   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:03.496074   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:03.496103   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:03.496115   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:03.496122   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.500371   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:03.500905   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:03.996146   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:03.996168   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:03.996176   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:03.996180   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:03.999817   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:04.496897   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:04.496927   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:04.496938   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:04.496946   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:04.501634   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:04.996866   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:04.996886   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:04.996894   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:04.996899   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.000028   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:05.496388   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:05.496410   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:05.496417   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.496421   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:05.501021   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:05.501573   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:05.996337   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:05.996362   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:05.996371   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:05.996376   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:05.999502   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:06.496159   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:06.496185   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:06.496196   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:06.496201   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:06.499954   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:06.996765   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:06.996784   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:06.996792   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:06.996796   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:07.000129   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:07.496829   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:07.496853   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:07.496864   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:07.496868   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:07.499884   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:07.996447   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:07.996472   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:07.996480   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:07.996485   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:08.000400   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:08.001102   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:08.496398   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:08.496428   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:08.496436   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:08.496440   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:08.499609   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:08.996547   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:08.996584   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:08.996595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:08.996600   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:09.000044   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:09.495922   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:09.495945   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:09.495953   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:09.495957   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:09.500237   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:09.996168   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:09.996191   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:09.996199   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:09.996202   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.000717   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:10.001176   33104 node_ready.go:53] node "ha-748477-m02" has status "Ready":"False"
	I0927 17:43:10.496022   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:10.496057   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.496065   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.496068   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.500059   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.500678   33104 node_ready.go:49] node "ha-748477-m02" has status "Ready":"True"
	I0927 17:43:10.500698   33104 node_ready.go:38] duration metric: took 17.504959286s for node "ha-748477-m02" to be "Ready" ...
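
(The repeated GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02 round trips above are minikube polling the joined node roughly twice a second until its Ready condition turns True. A client-go sketch of the same wait is shown below, assuming a kubeconfig path and using the node name from this run.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms for up to 6 minutes, like the 6m0s wait in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-748477-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // retry on transient errors
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node ready:", err == nil)
    }
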
	I0927 17:43:10.500708   33104 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:43:10.500784   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:10.500794   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.500801   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.500807   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.509536   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:43:10.516733   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.516818   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n99lr
	I0927 17:43:10.516827   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.516834   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.516839   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.520256   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.520854   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.520869   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.520876   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.520880   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.523812   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.524358   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.524373   33104 pod_ready.go:82] duration metric: took 7.610815ms for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.524381   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.524430   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qvp2z
	I0927 17:43:10.524439   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.524446   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.524450   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.527923   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.528592   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.528607   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.528614   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.528619   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.531438   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.532103   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.532118   33104 pod_ready.go:82] duration metric: took 7.732114ms for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.532126   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.532176   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477
	I0927 17:43:10.532184   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.532190   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.532194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.534800   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.535485   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.535500   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.535508   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.535514   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.539175   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.539692   33104 pod_ready.go:93] pod "etcd-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.539712   33104 pod_ready.go:82] duration metric: took 7.578916ms for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.539724   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.539792   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m02
	I0927 17:43:10.539803   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.539813   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.539818   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.542127   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.542656   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:10.542672   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.542680   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.542687   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.545034   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:10.545710   33104 pod_ready.go:93] pod "etcd-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.545724   33104 pod_ready.go:82] duration metric: took 5.993851ms for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.545736   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:10.697130   33104 request.go:632] Waited for 151.318503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:43:10.697197   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:43:10.697205   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.697216   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.697225   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.700810   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.896840   33104 request.go:632] Waited for 195.326418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.896917   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:10.896923   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:10.896933   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:10.896941   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:10.900668   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:10.901151   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:10.901172   33104 pod_ready.go:82] duration metric: took 355.430016ms for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
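The "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (roughly QPS 5 / Burst 10 on an otherwise unconfigured rest.Config), not from server-side API Priority and Fairness. A minimal sketch of how a caller could raise those limits when building a clientset; the kubeconfig path and the chosen values are illustrative, not what minikube itself does:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}

	// Raise the client-side rate limits so bursts of GETs are not queued.
	// The low defaults are what produce the "Waited ... due to
	// client-side throttling" log lines.
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", clientset)
}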
	I0927 17:43:10.901182   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.096351   33104 request.go:632] Waited for 195.090932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:43:11.096408   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:43:11.096414   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.096422   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.096425   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.099605   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:11.296522   33104 request.go:632] Waited for 196.379972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:11.296583   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:11.296588   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.296595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.296599   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.299521   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:43:11.299966   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:11.299983   33104 pod_ready.go:82] duration metric: took 398.795354ms for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.299992   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.496407   33104 request.go:632] Waited for 196.359677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:43:11.496465   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:43:11.496470   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.496478   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.496483   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.503613   33104 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0927 17:43:11.696825   33104 request.go:632] Waited for 192.418859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:11.696922   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:11.696934   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.696944   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.696952   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.700522   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:11.701092   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:11.701110   33104 pod_ready.go:82] duration metric: took 401.113109ms for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.701119   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:11.896057   33104 request.go:632] Waited for 194.879526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:43:11.896120   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:43:11.896126   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:11.896132   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:11.896136   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:11.899805   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.096909   33104 request.go:632] Waited for 196.394213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.096966   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.096971   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.096978   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.096983   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.100042   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.100632   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.100653   33104 pod_ready.go:82] duration metric: took 399.528293ms for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.100663   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.296780   33104 request.go:632] Waited for 196.049394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:43:12.296852   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:43:12.296857   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.296864   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.296868   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.300216   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.497120   33104 request.go:632] Waited for 195.887177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.497190   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:12.497198   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.497208   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.497214   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.500765   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.501287   33104 pod_ready.go:93] pod "kube-proxy-kxwmh" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.501308   33104 pod_ready.go:82] duration metric: took 400.639485ms for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.501318   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.696369   33104 request.go:632] Waited for 194.968904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:43:12.696426   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:43:12.696431   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.696440   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.696444   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.699706   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.896719   33104 request.go:632] Waited for 196.366182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:12.896803   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:12.896809   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:12.896816   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:12.896823   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:12.900077   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:12.900632   33104 pod_ready.go:93] pod "kube-proxy-p76v9" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:12.900654   33104 pod_ready.go:82] duration metric: took 399.328849ms for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:12.900664   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.096686   33104 request.go:632] Waited for 195.950266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:43:13.096742   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:43:13.096747   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.096754   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.096758   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.099788   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.296662   33104 request.go:632] Waited for 196.364642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:13.296715   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:43:13.296720   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.296727   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.296730   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.299832   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.300287   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:13.300305   33104 pod_ready.go:82] duration metric: took 399.635674ms for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.300314   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.496503   33104 request.go:632] Waited for 196.090954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:43:13.496579   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:43:13.496587   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.496595   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.496602   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.500814   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:13.697121   33104 request.go:632] Waited for 195.399465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:13.697197   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:43:13.697205   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.697216   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.697223   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.700589   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:13.701018   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:43:13.701040   33104 pod_ready.go:82] duration metric: took 400.71901ms for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:43:13.701054   33104 pod_ready.go:39] duration metric: took 3.200329427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
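The pod_ready.go block above repeats one pattern per pod: GET the pod, GET its node, and declare the pod done once its PodReady condition is True. A minimal client-go sketch of that polling pattern, assuming a kubeconfig path and a fixed 2s interval (both illustrative, not minikube's exact implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	name := "coredns-7c65d6cfc9-n99lr" // one of the pods waited on above
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			panic(fmt.Errorf("timed out waiting for %q: %w", name, ctx.Err()))
		case <-time.After(2 * time.Second):
		}
	}
}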
	I0927 17:43:13.701073   33104 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:43:13.701127   33104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:43:13.716701   33104 api_server.go:72] duration metric: took 21.080586953s to wait for apiserver process to appear ...
	I0927 17:43:13.716724   33104 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:43:13.716745   33104 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0927 17:43:13.721063   33104 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0927 17:43:13.721136   33104 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0927 17:43:13.721142   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.721150   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.721159   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.722231   33104 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 17:43:13.722325   33104 api_server.go:141] control plane version: v1.31.1
	I0927 17:43:13.722340   33104 api_server.go:131] duration metric: took 5.610564ms to wait for apiserver health ...
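The health check above is a plain HTTPS GET against /healthz that expects the literal body "ok", followed by a GET on /version to read the control-plane version (v1.31.1 in this run). A stripped-down sketch of both calls through client-go's discovery client; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz and expect the literal body "ok", as in the log above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version to read the control-plane version.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}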
	I0927 17:43:13.722347   33104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:43:13.896697   33104 request.go:632] Waited for 174.282639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:13.896775   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:13.896782   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:13.896793   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:13.896800   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:13.901747   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:13.907225   33104 system_pods.go:59] 17 kube-system pods found
	I0927 17:43:13.907254   33104 system_pods.go:61] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:43:13.907259   33104 system_pods.go:61] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:43:13.907264   33104 system_pods.go:61] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:43:13.907268   33104 system_pods.go:61] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:43:13.907271   33104 system_pods.go:61] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:43:13.907274   33104 system_pods.go:61] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:43:13.907278   33104 system_pods.go:61] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:43:13.907282   33104 system_pods.go:61] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:43:13.907285   33104 system_pods.go:61] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:43:13.907288   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:43:13.907293   33104 system_pods.go:61] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:43:13.907296   33104 system_pods.go:61] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:43:13.907302   33104 system_pods.go:61] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:43:13.907305   33104 system_pods.go:61] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:43:13.907308   33104 system_pods.go:61] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:43:13.907311   33104 system_pods.go:61] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:43:13.907314   33104 system_pods.go:61] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:43:13.907321   33104 system_pods.go:74] duration metric: took 184.96747ms to wait for pod list to return data ...
	I0927 17:43:13.907331   33104 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:43:14.096832   33104 request.go:632] Waited for 189.427057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:43:14.096891   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:43:14.096897   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.096905   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.096909   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.100749   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:14.101009   33104 default_sa.go:45] found service account: "default"
	I0927 17:43:14.101029   33104 default_sa.go:55] duration metric: took 193.692837ms for default service account to be created ...
	I0927 17:43:14.101037   33104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:43:14.296482   33104 request.go:632] Waited for 195.378336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:14.296581   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:43:14.296592   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.296603   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.296611   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.300663   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:43:14.305343   33104 system_pods.go:86] 17 kube-system pods found
	I0927 17:43:14.305387   33104 system_pods.go:89] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:43:14.305393   33104 system_pods.go:89] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:43:14.305397   33104 system_pods.go:89] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:43:14.305401   33104 system_pods.go:89] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:43:14.305405   33104 system_pods.go:89] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:43:14.305410   33104 system_pods.go:89] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:43:14.305415   33104 system_pods.go:89] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:43:14.305419   33104 system_pods.go:89] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:43:14.305423   33104 system_pods.go:89] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:43:14.305427   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:43:14.305435   33104 system_pods.go:89] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:43:14.305438   33104 system_pods.go:89] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:43:14.305442   33104 system_pods.go:89] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:43:14.305446   33104 system_pods.go:89] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:43:14.305450   33104 system_pods.go:89] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:43:14.305454   33104 system_pods.go:89] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:43:14.305457   33104 system_pods.go:89] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:43:14.305464   33104 system_pods.go:126] duration metric: took 204.421896ms to wait for k8s-apps to be running ...
	I0927 17:43:14.305470   33104 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:43:14.305515   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:43:14.319602   33104 system_svc.go:56] duration metric: took 14.121235ms WaitForService to wait for kubelet
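The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over minikube's ssh_runner and relies purely on the exit code. A local-only equivalent in Go (no SSH transport, which is a simplification of what the log shows):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone says whether the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}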
	I0927 17:43:14.319638   33104 kubeadm.go:582] duration metric: took 21.683524227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:43:14.319663   33104 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:43:14.497069   33104 request.go:632] Waited for 177.328804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0927 17:43:14.497147   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0927 17:43:14.497154   33104 round_trippers.go:469] Request Headers:
	I0927 17:43:14.497163   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:43:14.497168   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:43:14.500866   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:43:14.501573   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:43:14.501596   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:43:14.501610   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:43:14.501614   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:43:14.501620   33104 node_conditions.go:105] duration metric: took 181.9516ms to run NodePressure ...
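The node_conditions step above lists all nodes and reads their ephemeral-storage and CPU capacity before verifying pressure conditions. A small sketch of reading the same fields with client-go; the kubeconfig path is assumed, and the pressure-condition printout is only an illustration of where those values live in the core/v1 API:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())

		// A node under pressure would report MemoryPressure/DiskPressure=True here.
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", cond.Type, cond.Status)
			}
		}
	}
}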
	I0927 17:43:14.501634   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:43:14.501664   33104 start.go:255] writing updated cluster config ...
	I0927 17:43:14.503659   33104 out.go:201] 
	I0927 17:43:14.505222   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:14.505350   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:14.506867   33104 out.go:177] * Starting "ha-748477-m03" control-plane node in "ha-748477" cluster
	I0927 17:43:14.508071   33104 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:43:14.508097   33104 cache.go:56] Caching tarball of preloaded images
	I0927 17:43:14.508199   33104 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:43:14.508212   33104 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:43:14.508319   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:14.508514   33104 start.go:360] acquireMachinesLock for ha-748477-m03: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:43:14.508582   33104 start.go:364] duration metric: took 33.744µs to acquireMachinesLock for "ha-748477-m03"
	I0927 17:43:14.508607   33104 start.go:93] Provisioning new machine with config: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:43:14.508723   33104 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0927 17:43:14.510363   33104 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 17:43:14.510454   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:14.510494   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:14.525333   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0927 17:43:14.525777   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:14.526245   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:14.526298   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:14.526634   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:14.526863   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:14.527027   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:14.527179   33104 start.go:159] libmachine.API.Create for "ha-748477" (driver="kvm2")
	I0927 17:43:14.527207   33104 client.go:168] LocalClient.Create starting
	I0927 17:43:14.527244   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 17:43:14.527283   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:43:14.527300   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:43:14.527373   33104 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 17:43:14.527399   33104 main.go:141] libmachine: Decoding PEM data...
	I0927 17:43:14.527413   33104 main.go:141] libmachine: Parsing certificate...
	I0927 17:43:14.527437   33104 main.go:141] libmachine: Running pre-create checks...
	I0927 17:43:14.527447   33104 main.go:141] libmachine: (ha-748477-m03) Calling .PreCreateCheck
	I0927 17:43:14.527643   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:14.528097   33104 main.go:141] libmachine: Creating machine...
	I0927 17:43:14.528113   33104 main.go:141] libmachine: (ha-748477-m03) Calling .Create
	I0927 17:43:14.528262   33104 main.go:141] libmachine: (ha-748477-m03) Creating KVM machine...
	I0927 17:43:14.529473   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found existing default KVM network
	I0927 17:43:14.529581   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found existing private KVM network mk-ha-748477
	I0927 17:43:14.529722   33104 main.go:141] libmachine: (ha-748477-m03) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 ...
	I0927 17:43:14.529748   33104 main.go:141] libmachine: (ha-748477-m03) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 17:43:14.529795   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.529703   33861 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:43:14.529867   33104 main.go:141] libmachine: (ha-748477-m03) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 17:43:14.759285   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.759157   33861 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa...
	I0927 17:43:14.801359   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.801230   33861 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/ha-748477-m03.rawdisk...
	I0927 17:43:14.801398   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Writing magic tar header
	I0927 17:43:14.801441   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Writing SSH key tar header
	I0927 17:43:14.801464   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:14.801363   33861 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 ...
	I0927 17:43:14.801486   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03
	I0927 17:43:14.801542   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03 (perms=drwx------)
	I0927 17:43:14.801588   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 17:43:14.801602   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 17:43:14.801611   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:43:14.801620   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 17:43:14.801631   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 17:43:14.801640   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 17:43:14.801647   33104 main.go:141] libmachine: (ha-748477-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 17:43:14.801654   33104 main.go:141] libmachine: (ha-748477-m03) Creating domain...
	I0927 17:43:14.801662   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 17:43:14.801670   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 17:43:14.801678   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home/jenkins
	I0927 17:43:14.801683   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Checking permissions on dir: /home
	I0927 17:43:14.801690   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Skipping /home - not owner
	I0927 17:43:14.802911   33104 main.go:141] libmachine: (ha-748477-m03) define libvirt domain using xml: 
	I0927 17:43:14.802928   33104 main.go:141] libmachine: (ha-748477-m03) <domain type='kvm'>
	I0927 17:43:14.802938   33104 main.go:141] libmachine: (ha-748477-m03)   <name>ha-748477-m03</name>
	I0927 17:43:14.802946   33104 main.go:141] libmachine: (ha-748477-m03)   <memory unit='MiB'>2200</memory>
	I0927 17:43:14.802953   33104 main.go:141] libmachine: (ha-748477-m03)   <vcpu>2</vcpu>
	I0927 17:43:14.802962   33104 main.go:141] libmachine: (ha-748477-m03)   <features>
	I0927 17:43:14.802968   33104 main.go:141] libmachine: (ha-748477-m03)     <acpi/>
	I0927 17:43:14.802975   33104 main.go:141] libmachine: (ha-748477-m03)     <apic/>
	I0927 17:43:14.802985   33104 main.go:141] libmachine: (ha-748477-m03)     <pae/>
	I0927 17:43:14.802993   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803022   33104 main.go:141] libmachine: (ha-748477-m03)   </features>
	I0927 17:43:14.803039   33104 main.go:141] libmachine: (ha-748477-m03)   <cpu mode='host-passthrough'>
	I0927 17:43:14.803047   33104 main.go:141] libmachine: (ha-748477-m03)   
	I0927 17:43:14.803056   33104 main.go:141] libmachine: (ha-748477-m03)   </cpu>
	I0927 17:43:14.803062   33104 main.go:141] libmachine: (ha-748477-m03)   <os>
	I0927 17:43:14.803067   33104 main.go:141] libmachine: (ha-748477-m03)     <type>hvm</type>
	I0927 17:43:14.803073   33104 main.go:141] libmachine: (ha-748477-m03)     <boot dev='cdrom'/>
	I0927 17:43:14.803077   33104 main.go:141] libmachine: (ha-748477-m03)     <boot dev='hd'/>
	I0927 17:43:14.803084   33104 main.go:141] libmachine: (ha-748477-m03)     <bootmenu enable='no'/>
	I0927 17:43:14.803090   33104 main.go:141] libmachine: (ha-748477-m03)   </os>
	I0927 17:43:14.803095   33104 main.go:141] libmachine: (ha-748477-m03)   <devices>
	I0927 17:43:14.803102   33104 main.go:141] libmachine: (ha-748477-m03)     <disk type='file' device='cdrom'>
	I0927 17:43:14.803110   33104 main.go:141] libmachine: (ha-748477-m03)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/boot2docker.iso'/>
	I0927 17:43:14.803116   33104 main.go:141] libmachine: (ha-748477-m03)       <target dev='hdc' bus='scsi'/>
	I0927 17:43:14.803122   33104 main.go:141] libmachine: (ha-748477-m03)       <readonly/>
	I0927 17:43:14.803131   33104 main.go:141] libmachine: (ha-748477-m03)     </disk>
	I0927 17:43:14.803140   33104 main.go:141] libmachine: (ha-748477-m03)     <disk type='file' device='disk'>
	I0927 17:43:14.803152   33104 main.go:141] libmachine: (ha-748477-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 17:43:14.803173   33104 main.go:141] libmachine: (ha-748477-m03)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/ha-748477-m03.rawdisk'/>
	I0927 17:43:14.803187   33104 main.go:141] libmachine: (ha-748477-m03)       <target dev='hda' bus='virtio'/>
	I0927 17:43:14.803204   33104 main.go:141] libmachine: (ha-748477-m03)     </disk>
	I0927 17:43:14.803214   33104 main.go:141] libmachine: (ha-748477-m03)     <interface type='network'>
	I0927 17:43:14.803232   33104 main.go:141] libmachine: (ha-748477-m03)       <source network='mk-ha-748477'/>
	I0927 17:43:14.803250   33104 main.go:141] libmachine: (ha-748477-m03)       <model type='virtio'/>
	I0927 17:43:14.803301   33104 main.go:141] libmachine: (ha-748477-m03)     </interface>
	I0927 17:43:14.803324   33104 main.go:141] libmachine: (ha-748477-m03)     <interface type='network'>
	I0927 17:43:14.803338   33104 main.go:141] libmachine: (ha-748477-m03)       <source network='default'/>
	I0927 17:43:14.803347   33104 main.go:141] libmachine: (ha-748477-m03)       <model type='virtio'/>
	I0927 17:43:14.803356   33104 main.go:141] libmachine: (ha-748477-m03)     </interface>
	I0927 17:43:14.803366   33104 main.go:141] libmachine: (ha-748477-m03)     <serial type='pty'>
	I0927 17:43:14.803374   33104 main.go:141] libmachine: (ha-748477-m03)       <target port='0'/>
	I0927 17:43:14.803386   33104 main.go:141] libmachine: (ha-748477-m03)     </serial>
	I0927 17:43:14.803397   33104 main.go:141] libmachine: (ha-748477-m03)     <console type='pty'>
	I0927 17:43:14.803409   33104 main.go:141] libmachine: (ha-748477-m03)       <target type='serial' port='0'/>
	I0927 17:43:14.803420   33104 main.go:141] libmachine: (ha-748477-m03)     </console>
	I0927 17:43:14.803429   33104 main.go:141] libmachine: (ha-748477-m03)     <rng model='virtio'>
	I0927 17:43:14.803439   33104 main.go:141] libmachine: (ha-748477-m03)       <backend model='random'>/dev/random</backend>
	I0927 17:43:14.803448   33104 main.go:141] libmachine: (ha-748477-m03)     </rng>
	I0927 17:43:14.803456   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803464   33104 main.go:141] libmachine: (ha-748477-m03)     
	I0927 17:43:14.803470   33104 main.go:141] libmachine: (ha-748477-m03)   </devices>
	I0927 17:43:14.803478   33104 main.go:141] libmachine: (ha-748477-m03) </domain>
	I0927 17:43:14.803488   33104 main.go:141] libmachine: (ha-748477-m03) 
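The XML emitted above is the libvirt domain definition for the new ha-748477-m03 VM: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the default and mk-ha-748477 networks. The driver defines the domain through its libmachine KVM plugin; a rough equivalent using the virsh CLI from Go, assuming the XML above has already been written to a file (paths and names here are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The domain XML from the log, saved to a file beforehand (illustrative path).
	xmlPath := "/tmp/ha-748477-m03.xml"

	// "virsh define" registers the domain with libvirt; "virsh start" boots it.
	for _, args := range [][]string{
		{"--connect", "qemu:///system", "define", xmlPath},
		{"--connect", "qemu:///system", "start", "ha-748477-m03"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Errorf("virsh %v: %v\n%s", args, err, out))
		}
		fmt.Printf("virsh %v:\n%s", args, out)
	}
}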
	I0927 17:43:14.809886   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:46:4f:8f in network default
	I0927 17:43:14.810424   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring networks are active...
	I0927 17:43:14.810447   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:14.811161   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring network default is active
	I0927 17:43:14.811552   33104 main.go:141] libmachine: (ha-748477-m03) Ensuring network mk-ha-748477 is active
	I0927 17:43:14.811864   33104 main.go:141] libmachine: (ha-748477-m03) Getting domain xml...
	I0927 17:43:14.812640   33104 main.go:141] libmachine: (ha-748477-m03) Creating domain...
	I0927 17:43:16.061728   33104 main.go:141] libmachine: (ha-748477-m03) Waiting to get IP...
	I0927 17:43:16.062561   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.063038   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.063058   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.062985   33861 retry.go:31] will retry after 274.225477ms: waiting for machine to come up
	I0927 17:43:16.338624   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.339183   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.339208   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.339134   33861 retry.go:31] will retry after 249.930567ms: waiting for machine to come up
	I0927 17:43:16.590699   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:16.591137   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:16.591158   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:16.591098   33861 retry.go:31] will retry after 427.975523ms: waiting for machine to come up
	I0927 17:43:17.021029   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.021704   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.021792   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.021629   33861 retry.go:31] will retry after 377.570175ms: waiting for machine to come up
	I0927 17:43:17.401315   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.401764   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.401789   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.401730   33861 retry.go:31] will retry after 480.401499ms: waiting for machine to come up
	I0927 17:43:17.883333   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:17.883876   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:17.883904   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:17.883818   33861 retry.go:31] will retry after 806.335644ms: waiting for machine to come up
	I0927 17:43:18.691641   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:18.692132   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:18.692163   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:18.692063   33861 retry.go:31] will retry after 996.155949ms: waiting for machine to come up
	I0927 17:43:19.690169   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:19.690576   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:19.690600   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:19.690536   33861 retry.go:31] will retry after 1.280499747s: waiting for machine to come up
	I0927 17:43:20.972507   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:20.972924   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:20.972949   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:20.972873   33861 retry.go:31] will retry after 1.740341439s: waiting for machine to come up
	I0927 17:43:22.715948   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:22.716453   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:22.716480   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:22.716399   33861 retry.go:31] will retry after 2.220570146s: waiting for machine to come up
	I0927 17:43:24.939094   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:24.939777   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:24.939807   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:24.939729   33861 retry.go:31] will retry after 1.898000228s: waiting for machine to come up
	I0927 17:43:26.839799   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:26.840424   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:26.840450   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:26.840370   33861 retry.go:31] will retry after 3.204742412s: waiting for machine to come up
	I0927 17:43:30.046789   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:30.047236   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:30.047261   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:30.047187   33861 retry.go:31] will retry after 3.849840599s: waiting for machine to come up
	I0927 17:43:33.899866   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:33.900417   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find current IP address of domain ha-748477-m03 in network mk-ha-748477
	I0927 17:43:33.900443   33104 main.go:141] libmachine: (ha-748477-m03) DBG | I0927 17:43:33.900384   33861 retry.go:31] will retry after 4.029402489s: waiting for machine to come up
	I0927 17:43:37.931866   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:37.932267   33104 main.go:141] libmachine: (ha-748477-m03) Found IP for machine: 192.168.39.225
	I0927 17:43:37.932289   33104 main.go:141] libmachine: (ha-748477-m03) Reserving static IP address...
	I0927 17:43:37.932301   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has current primary IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:37.932706   33104 main.go:141] libmachine: (ha-748477-m03) DBG | unable to find host DHCP lease matching {name: "ha-748477-m03", mac: "52:54:00:bf:59:33", ip: "192.168.39.225"} in network mk-ha-748477
	I0927 17:43:38.014671   33104 main.go:141] libmachine: (ha-748477-m03) Reserved static IP address: 192.168.39.225
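Between "Waiting to get IP..." and "Found IP for machine" the driver polls the DHCP leases of the mk-ha-748477 network for the VM's MAC address, sleeping for growing, jittered intervals between attempts (the retry.go:31 lines). A generic sketch of that retry-with-backoff shape; checkLease is a hypothetical stand-in for the lease lookup, and the deadline is shortened so the sketch finishes quickly:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkLease is a hypothetical placeholder for "look up the MAC in the
// network's DHCP leases"; it would return the IP once a lease appears.
func checkLease(mac string) (string, error) {
	return "", errors.New("no lease yet") // stubbed: always retries in this sketch
}

func main() {
	mac := "52:54:00:bf:59:33" // MAC from the log above
	deadline := time.Now().Add(10 * time.Second)

	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := checkLease(mac)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Jittered, roughly doubling delay, similar in spirit to the
		// "will retry after ..." intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	fmt.Println("timed out waiting for an IP")
}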
	I0927 17:43:38.014703   33104 main.go:141] libmachine: (ha-748477-m03) Waiting for SSH to be available...
	I0927 17:43:38.014712   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Getting to WaitForSSH function...
	I0927 17:43:38.017503   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.018016   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.018038   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.018293   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using SSH client type: external
	I0927 17:43:38.018324   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa (-rw-------)
	I0927 17:43:38.018358   33104 main.go:141] libmachine: (ha-748477-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 17:43:38.018375   33104 main.go:141] libmachine: (ha-748477-m03) DBG | About to run SSH command:
	I0927 17:43:38.018391   33104 main.go:141] libmachine: (ha-748477-m03) DBG | exit 0
	I0927 17:43:38.146846   33104 main.go:141] libmachine: (ha-748477-m03) DBG | SSH cmd err, output: <nil>: 
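The WaitForSSH step above shells out to the system ssh binary with the options logged at 17:43:38.018358 and runs "exit 0"; the probe succeeds as soon as that command returns exit status 0. A trimmed Go sketch of the same probe via os/exec, using the host and key path from this run and only a subset of the logged ssh options:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0", // the same no-op command the driver runs
	)
	return cmd.Run() == nil
}

func main() {
	host := "192.168.39.225"
	keyPath := "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa"

	for i := 0; i < 30; i++ {
		if sshReady(host, keyPath) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}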
	I0927 17:43:38.147182   33104 main.go:141] libmachine: (ha-748477-m03) KVM machine creation complete!
	I0927 17:43:38.147465   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:38.148028   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:38.148248   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:38.148515   33104 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 17:43:38.148529   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetState
	I0927 17:43:38.150026   33104 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 17:43:38.150038   33104 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 17:43:38.150043   33104 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 17:43:38.150053   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.152279   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.152703   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.152731   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.152930   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.153090   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.153241   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.153385   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.153555   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.153754   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.153768   33104 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 17:43:38.265876   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:43:38.265897   33104 main.go:141] libmachine: Detecting the provisioner...
	I0927 17:43:38.265904   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.268621   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.269047   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.269076   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.269294   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.269526   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.269745   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.269874   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.270033   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.270230   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.270243   33104 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 17:43:38.383161   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 17:43:38.383229   33104 main.go:141] libmachine: found compatible host: buildroot
	I0927 17:43:38.383244   33104 main.go:141] libmachine: Provisioning with buildroot...
	I0927 17:43:38.383259   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.383511   33104 buildroot.go:166] provisioning hostname "ha-748477-m03"
	I0927 17:43:38.383534   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.383702   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.386560   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.386936   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.386960   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.387130   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.387316   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.387515   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.387694   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.387876   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.388053   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.388066   33104 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477-m03 && echo "ha-748477-m03" | sudo tee /etc/hostname
	I0927 17:43:38.517221   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477-m03
	
	I0927 17:43:38.517257   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.520130   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.520637   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.520668   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.520845   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.521018   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.521146   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.521319   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.521531   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.521692   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.521708   33104 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:43:38.647377   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:43:38.647402   33104 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:43:38.647415   33104 buildroot.go:174] setting up certificates
	I0927 17:43:38.647425   33104 provision.go:84] configureAuth start
	I0927 17:43:38.647433   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetMachineName
	I0927 17:43:38.647695   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:38.650891   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.651352   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.651376   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.651507   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.653842   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.654158   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.654175   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.654290   33104 provision.go:143] copyHostCerts
	I0927 17:43:38.654319   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:43:38.654364   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:43:38.654376   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:43:38.654459   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:43:38.654546   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:43:38.654572   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:43:38.654581   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:43:38.654616   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:43:38.654702   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:43:38.654726   33104 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:43:38.654735   33104 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:43:38.654768   33104 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:43:38.654847   33104 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477-m03 san=[127.0.0.1 192.168.39.225 ha-748477-m03 localhost minikube]
	I0927 17:43:38.750947   33104 provision.go:177] copyRemoteCerts
	I0927 17:43:38.751001   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:43:38.751023   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.753961   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.754344   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.754372   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.754619   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.754798   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.754987   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.755087   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:38.840538   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:43:38.840622   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:43:38.865467   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:43:38.865545   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 17:43:38.889287   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:43:38.889354   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 17:43:38.913853   33104 provision.go:87] duration metric: took 266.415768ms to configureAuth
	I0927 17:43:38.913886   33104 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:43:38.914119   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:38.914188   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:38.916953   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.917343   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:38.917389   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:38.917634   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:38.917835   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.918007   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:38.918197   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:38.918414   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:38.918567   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:38.918582   33104 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:43:39.149801   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:43:39.149830   33104 main.go:141] libmachine: Checking connection to Docker...
	I0927 17:43:39.149841   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetURL
	I0927 17:43:39.151338   33104 main.go:141] libmachine: (ha-748477-m03) DBG | Using libvirt version 6000000
	I0927 17:43:39.154047   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.154538   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.154584   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.154757   33104 main.go:141] libmachine: Docker is up and running!
	I0927 17:43:39.154780   33104 main.go:141] libmachine: Reticulating splines...
	I0927 17:43:39.154790   33104 client.go:171] duration metric: took 24.627572253s to LocalClient.Create
	I0927 17:43:39.154853   33104 start.go:167] duration metric: took 24.627635604s to libmachine.API.Create "ha-748477"
	I0927 17:43:39.154866   33104 start.go:293] postStartSetup for "ha-748477-m03" (driver="kvm2")
	I0927 17:43:39.154874   33104 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:43:39.154890   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.155121   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:43:39.155148   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.157417   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.157783   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.157810   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.157968   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.158151   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.158328   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.158514   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.245650   33104 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:43:39.250017   33104 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:43:39.250039   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:43:39.250125   33104 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:43:39.250232   33104 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:43:39.250246   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:43:39.250349   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:43:39.261588   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:43:39.287333   33104 start.go:296] duration metric: took 132.452339ms for postStartSetup
	I0927 17:43:39.287401   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetConfigRaw
	I0927 17:43:39.288010   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:39.291082   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.291501   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.291531   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.291849   33104 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:43:39.292090   33104 start.go:128] duration metric: took 24.783356022s to createHost
	I0927 17:43:39.292116   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.294390   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.294793   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.294820   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.294965   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.295132   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.295273   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.295377   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.295501   33104 main.go:141] libmachine: Using SSH client type: native
	I0927 17:43:39.295656   33104 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0927 17:43:39.295666   33104 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:43:39.411619   33104 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727459019.389020724
	
	I0927 17:43:39.411648   33104 fix.go:216] guest clock: 1727459019.389020724
	I0927 17:43:39.411657   33104 fix.go:229] Guest: 2024-09-27 17:43:39.389020724 +0000 UTC Remote: 2024-09-27 17:43:39.292103608 +0000 UTC m=+148.045500714 (delta=96.917116ms)
	I0927 17:43:39.411678   33104 fix.go:200] guest clock delta is within tolerance: 96.917116ms
	I0927 17:43:39.411685   33104 start.go:83] releasing machines lock for "ha-748477-m03", held for 24.903091459s
	I0927 17:43:39.411706   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.411995   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:39.415530   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.415971   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.416001   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.418411   33104 out.go:177] * Found network options:
	I0927 17:43:39.419695   33104 out.go:177]   - NO_PROXY=192.168.39.217,192.168.39.58
	W0927 17:43:39.421098   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 17:43:39.421127   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:43:39.421146   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.421784   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.421985   33104 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:43:39.422065   33104 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:43:39.422102   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	W0927 17:43:39.422186   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	W0927 17:43:39.422213   33104 proxy.go:119] fail to check proxy env: Error ip not in block
	I0927 17:43:39.422273   33104 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:43:39.422290   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:43:39.425046   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425070   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425405   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.425433   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425459   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:39.425473   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:39.425650   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.425656   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:43:39.425841   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.425845   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:43:39.425989   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.426058   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:43:39.426122   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.426163   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:43:39.669795   33104 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:43:39.677634   33104 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:43:39.677716   33104 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:43:39.695349   33104 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 17:43:39.695382   33104 start.go:495] detecting cgroup driver to use...
	I0927 17:43:39.695446   33104 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:43:39.715092   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:43:39.728101   33104 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:43:39.728166   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:43:39.743124   33104 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:43:39.759724   33104 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:43:39.876420   33104 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:43:40.024261   33104 docker.go:233] disabling docker service ...
	I0927 17:43:40.024330   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:43:40.038245   33104 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:43:40.051565   33104 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:43:40.182718   33104 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:43:40.288143   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:43:40.301741   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:43:40.319929   33104 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:43:40.319996   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.330123   33104 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:43:40.330196   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.340177   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.350053   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.359649   33104 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:43:40.370207   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.380395   33104 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.396915   33104 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:43:40.407460   33104 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:43:40.418005   33104 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 17:43:40.418063   33104 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 17:43:40.432276   33104 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:43:40.441789   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:43:40.568411   33104 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:43:40.662140   33104 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:43:40.662238   33104 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:43:40.666515   33104 start.go:563] Will wait 60s for crictl version
	I0927 17:43:40.666579   33104 ssh_runner.go:195] Run: which crictl
	I0927 17:43:40.670183   33104 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:43:40.717483   33104 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:43:40.717566   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:43:40.748394   33104 ssh_runner.go:195] Run: crio --version
	I0927 17:43:40.780693   33104 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:43:40.782171   33104 out.go:177]   - env NO_PROXY=192.168.39.217
	I0927 17:43:40.783616   33104 out.go:177]   - env NO_PROXY=192.168.39.217,192.168.39.58
	I0927 17:43:40.784733   33104 main.go:141] libmachine: (ha-748477-m03) Calling .GetIP
	I0927 17:43:40.787731   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:40.788217   33104 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:43:40.788253   33104 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:43:40.788539   33104 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:43:40.792731   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:43:40.806447   33104 mustload.go:65] Loading cluster: ha-748477
	I0927 17:43:40.806781   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:43:40.807166   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:40.807212   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:40.822513   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0927 17:43:40.823010   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:40.823465   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:40.823485   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:40.823815   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:40.824022   33104 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:43:40.825639   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:43:40.826053   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:40.826124   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:40.841477   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I0927 17:43:40.841930   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:40.842426   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:40.842447   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:40.842805   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:40.843010   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:43:40.843186   33104 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.225
	I0927 17:43:40.843200   33104 certs.go:194] generating shared ca certs ...
	I0927 17:43:40.843218   33104 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:40.843371   33104 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:43:40.843411   33104 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:43:40.843417   33104 certs.go:256] generating profile certs ...
	I0927 17:43:40.843480   33104 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:43:40.843503   33104 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9
	I0927 17:43:40.843516   33104 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.225 192.168.39.254]
	I0927 17:43:41.042816   33104 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 ...
	I0927 17:43:41.042845   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9: {Name:mkb90c985fb1d25421e8db77e70e31dc9e70f7a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:41.043004   33104 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9 ...
	I0927 17:43:41.043015   33104 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9: {Name:mk8a7a00dfda8086d770b62e0a97735d5734e23a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:43:41.043080   33104 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.003dddf9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:43:41.043215   33104 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.003dddf9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:43:41.043337   33104 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:43:41.043351   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:43:41.043364   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:43:41.043379   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:43:41.043391   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:43:41.043404   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:43:41.043417   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:43:41.043428   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:43:41.066805   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:43:41.066895   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:43:41.066928   33104 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:43:41.066939   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:43:41.066959   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:43:41.066982   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:43:41.067004   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:43:41.067043   33104 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:43:41.067080   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.067101   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.067118   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.067151   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:43:41.070167   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:41.070759   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:43:41.070790   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:41.071003   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:43:41.071223   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:43:41.071385   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:43:41.071558   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:43:41.147059   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0927 17:43:41.152408   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0927 17:43:41.164540   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0927 17:43:41.168851   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0927 17:43:41.179537   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0927 17:43:41.183316   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0927 17:43:41.193077   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0927 17:43:41.197075   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0927 17:43:41.207804   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0927 17:43:41.211696   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0927 17:43:41.221742   33104 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0927 17:43:41.225610   33104 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0927 17:43:41.235977   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:43:41.260849   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:43:41.285062   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:43:41.309713   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:43:41.332498   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0927 17:43:41.356394   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 17:43:41.380266   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:43:41.404334   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:43:41.432122   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:43:41.455867   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:43:41.479143   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:43:41.501633   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0927 17:43:41.518790   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0927 17:43:41.534928   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0927 17:43:41.551854   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0927 17:43:41.568140   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0927 17:43:41.584545   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0927 17:43:41.600656   33104 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0927 17:43:41.616675   33104 ssh_runner.go:195] Run: openssl version
	I0927 17:43:41.622211   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:43:41.632889   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.637255   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.637327   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:43:41.642842   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:43:41.653070   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:43:41.663785   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.668204   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.668272   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:43:41.673573   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:43:41.686375   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:43:41.697269   33104 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.702234   33104 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.702308   33104 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:43:41.707933   33104 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:43:41.719033   33104 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:43:41.723054   33104 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 17:43:41.723112   33104 kubeadm.go:934] updating node {m03 192.168.39.225 8443 v1.31.1 crio true true} ...
	I0927 17:43:41.723208   33104 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:43:41.723244   33104 kube-vip.go:115] generating kube-vip config ...
	I0927 17:43:41.723291   33104 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:43:41.741075   33104 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:43:41.741157   33104 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0927 17:43:41.741232   33104 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:43:41.751232   33104 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 17:43:41.751324   33104 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 17:43:41.760899   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 17:43:41.760908   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 17:43:41.760931   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:43:41.760912   33104 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 17:43:41.760955   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:43:41.760999   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 17:43:41.761007   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:43:41.761019   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 17:43:41.775995   33104 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:43:41.776050   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0927 17:43:41.776070   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 17:43:41.776102   33104 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0927 17:43:41.776118   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0927 17:43:41.776149   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 17:43:41.807089   33104 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0927 17:43:41.807127   33104 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
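	The transfer above follows a simple pattern: each binary is fetched from dl.k8s.io with a checksum=file:...sha256 source, cached under .minikube/cache, and only scp'd into /var/lib/minikube/binaries/v1.31.1 when the stat existence check fails. A minimal sketch of downloading one such binary and verifying it against its published .sha256 file, using only the Go standard library (the helper name download is illustrative):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download fetches url into path and returns the SHA-256 of the bytes written.
func download(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

	got, err := download(base, "kubectl")
	if err != nil {
		panic(err)
	}

	// The published checksum file holds the hex digest (possibly followed by
	// a file name), matching the checksum=file:... URL in the log.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)

	if got != strings.Fields(string(want))[0] {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified:", got)
}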
	I0927 17:43:42.630057   33104 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0927 17:43:42.639770   33104 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 17:43:42.656295   33104 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:43:42.672793   33104 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:43:42.690976   33104 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:43:42.694501   33104 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 17:43:42.706939   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:43:42.822795   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
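	Before starting the kubelet, the bash one-liner above pins control-plane.minikube.internal to the HA virtual IP in /etc/hosts: it filters out any previous mapping for that name and appends the new one via a temp file. A minimal sketch of the same idempotent edit in Go, assuming root privileges; pinHost is an illustrative name, not a minikube function:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo pipeline the log runs over SSH.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous "<ip>\t<name>" mapping for the control-plane name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)

	// Write via a temp file and rename, like the /tmp/h.$$ + cp dance above.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}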
	I0927 17:43:42.839249   33104 host.go:66] Checking if "ha-748477" exists ...
	I0927 17:43:42.839706   33104 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:43:42.839761   33104 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:43:42.856985   33104 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0927 17:43:42.857497   33104 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:43:42.858071   33104 main.go:141] libmachine: Using API Version  1
	I0927 17:43:42.858097   33104 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:43:42.858483   33104 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:43:42.858728   33104 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:43:42.858882   33104 start.go:317] joinCluster: &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:43:42.858996   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0927 17:43:42.859017   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:43:42.862454   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:42.862936   33104 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:43:42.862961   33104 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:43:42.863106   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:43:42.863242   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:43:42.863373   33104 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:43:42.863511   33104 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:43:43.018533   33104 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:43:43.018576   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gg5wlb.ttkule5dhfsmakjt --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m03 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443"
	I0927 17:44:05.879368   33104 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gg5wlb.ttkule5dhfsmakjt --discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-748477-m03 --control-plane --apiserver-advertise-address=192.168.39.225 --apiserver-bind-port=8443": (22.860766617s)
	I0927 17:44:05.879405   33104 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0927 17:44:06.450456   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-748477-m03 minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=ha-748477 minikube.k8s.io/primary=false
	I0927 17:44:06.570812   33104 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-748477-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0927 17:44:06.695756   33104 start.go:319] duration metric: took 23.836880106s to joinCluster
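	joinCluster above is a three-step sequence: ask the existing control plane for a fresh join command (kubeadm token create --print-join-command --ttl=0), extend it with the control-plane flags and CRI socket seen in the log before running it on m03, then label the new node and drop its control-plane NoSchedule taint. A minimal sketch of building the extended join command; the function name and the <redacted> placeholders are illustrative:

package main

import (
	"fmt"
	"strings"
)

// controlPlaneJoinCmd extends the output of
// `kubeadm token create --print-join-command` with the extra flags the log
// shows for joining m03 as an additional control-plane node.
func controlPlaneJoinCmd(printed, nodeName, advertiseIP, criSocket string) string {
	parts := []string{
		strings.TrimSpace(printed),
		"--ignore-preflight-errors=all",
		"--cri-socket " + criSocket,
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return strings.Join(parts, " ")
}

func main() {
	printed := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
	fmt.Println(controlPlaneJoinCmd(printed, "ha-748477-m03", "192.168.39.225", "unix:///var/run/crio/crio.sock"))
}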
	I0927 17:44:06.695831   33104 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 17:44:06.696168   33104 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:44:06.698664   33104 out.go:177] * Verifying Kubernetes components...
	I0927 17:44:06.700038   33104 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:44:06.966281   33104 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:44:06.988180   33104 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:44:06.988494   33104 kapi.go:59] client config for ha-748477: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0927 17:44:06.988564   33104 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.217:8443
	I0927 17:44:06.988753   33104 node_ready.go:35] waiting up to 6m0s for node "ha-748477-m03" to be "Ready" ...
	I0927 17:44:06.988830   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:06.988838   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:06.988846   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:06.988849   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:06.992308   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:07.488982   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:07.489008   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:07.489020   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:07.489027   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:07.492583   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:07.988968   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:07.988994   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:07.989004   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:07.989011   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:07.993492   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:08.489684   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:08.489716   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:08.489726   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:08.489733   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:08.492856   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:08.989902   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:08.989923   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:08.989931   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:08.989937   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:08.994357   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:08.995455   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:09.489815   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:09.489842   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:09.489854   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:09.489860   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:09.493739   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:09.989180   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:09.989203   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:09.989211   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:09.989215   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:09.993543   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:10.489209   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:10.489234   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:10.489246   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:10.489253   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:10.492922   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:10.989208   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:10.989240   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:10.989251   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:10.989256   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:10.992477   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:11.489265   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:11.489287   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:11.489296   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:11.489304   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:11.492474   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:11.492926   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:11.989355   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:11.989380   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:11.989390   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:11.989394   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:11.992835   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:12.489471   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:12.489492   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:12.489500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:12.489504   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:12.493061   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:12.989541   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:12.989567   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:12.989575   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:12.989579   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:12.992728   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:13.489760   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:13.489793   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:13.489806   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:13.489812   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:13.497872   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:44:13.498431   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:13.989853   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:13.989880   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:13.989891   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:13.989897   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:13.993174   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:14.489807   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:14.489829   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:14.489837   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:14.489841   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:14.492717   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:14.989051   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:14.989078   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:14.989086   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:14.989090   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:14.992500   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.489879   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:15.489902   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:15.489912   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:15.489917   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:15.493620   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.989863   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:15.989886   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:15.989894   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:15.989898   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:15.993642   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:15.994205   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:16.489216   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:16.489238   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:16.489246   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:16.489251   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:16.492886   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:16.989910   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:16.989931   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:16.989940   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:16.989945   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:16.993350   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:17.489239   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:17.489263   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:17.489272   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:17.489276   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:17.492577   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:17.989223   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:17.989270   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:17.989278   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:17.989284   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:17.992505   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:18.489403   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:18.489430   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:18.489443   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:18.489449   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:18.492511   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:18.493206   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:18.989479   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:18.989510   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:18.989519   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:18.989524   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:18.992918   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:19.489608   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:19.489633   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:19.489641   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:19.489646   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:19.493022   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:19.989818   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:19.989842   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:19.989850   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:19.989853   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:19.993975   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:20.489504   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:20.489533   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:20.489542   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:20.489546   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:20.492731   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:20.493288   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:20.988966   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:20.988991   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:20.989000   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:20.989003   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:20.992757   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:21.489625   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:21.489646   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:21.489657   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:21.489662   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:21.493197   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:21.988951   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:21.988974   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:21.988982   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:21.988986   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:21.992564   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:22.489223   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:22.489254   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:22.489262   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:22.489270   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:22.492275   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:22.989460   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:22.989483   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:22.989493   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:22.989502   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:22.992826   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:22.993315   33104 node_ready.go:53] node "ha-748477-m03" has status "Ready":"False"
	I0927 17:44:23.489736   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:23.489756   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.489764   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.489768   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.495068   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:23.989320   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:23.989345   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.989356   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.989363   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.992950   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:23.993381   33104 node_ready.go:49] node "ha-748477-m03" has status "Ready":"True"
	I0927 17:44:23.993400   33104 node_ready.go:38] duration metric: took 17.004633158s for node "ha-748477-m03" to be "Ready" ...
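	The node_ready wait above is a plain polling loop: GET /api/v1/nodes/ha-748477-m03 roughly every 500 ms and stop once the Ready condition in .status.conditions reports "True" (about 17 s here, with a 6 m cap). A minimal sketch of that check using only the standard library; it assumes an *http.Client already configured with the cluster's client certificate and CA from the kubeconfig, and the structs are trimmed to the fields the check reads:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus mirrors just enough of the Node object to read the Ready condition.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitNodeReady polls the API server until the node's Ready condition is "True"
// or the deadline passes. client must already carry the cluster's TLS credentials.
func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
		if err == nil {
			var n nodeStatus
			decodeErr := json.NewDecoder(resp.Body).Decode(&n)
			resp.Body.Close()
			if decodeErr == nil {
				for _, c := range n.Status.Conditions {
					if c.Type == "Ready" && c.Status == "True" {
						return nil
					}
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	}
	return fmt.Errorf("node %s not Ready within %s", node, timeout)
}

func main() {
	// http.DefaultClient only works against an unauthenticated endpoint; a real
	// client needs the client cert/key and CA referenced in the kubeconfig.
	err := waitNodeReady(http.DefaultClient, "https://192.168.39.217:8443", "ha-748477-m03", 6*time.Minute)
	fmt.Println(err)
}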
	I0927 17:44:23.993411   33104 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:44:23.993477   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:23.993489   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:23.993500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:23.993509   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:23.999279   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:24.006063   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.006162   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-n99lr
	I0927 17:44:24.006171   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.006185   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.006194   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.009676   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.010413   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.010431   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.010440   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.010444   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.013067   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.013609   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.013634   33104 pod_ready.go:82] duration metric: took 7.540949ms for pod "coredns-7c65d6cfc9-n99lr" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.013648   33104 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.013707   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-qvp2z
	I0927 17:44:24.013715   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.013723   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.013734   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.016476   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.017040   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.017054   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.017061   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.017064   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.019465   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.020063   33104 pod_ready.go:93] pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.020102   33104 pod_ready.go:82] duration metric: took 6.431397ms for pod "coredns-7c65d6cfc9-qvp2z" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.020111   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.020159   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477
	I0927 17:44:24.020167   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.020173   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.020177   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.022709   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.023386   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.023403   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.023413   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.023418   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.025863   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.026254   33104 pod_ready.go:93] pod "etcd-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.026275   33104 pod_ready.go:82] duration metric: took 6.154043ms for pod "etcd-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.026285   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.026339   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m02
	I0927 17:44:24.026349   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.026358   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.026367   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.028864   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.029549   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:24.029570   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.029581   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.029587   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.032020   33104 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0927 17:44:24.032371   33104 pod_ready.go:93] pod "etcd-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.032386   33104 pod_ready.go:82] duration metric: took 6.091988ms for pod "etcd-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.032394   33104 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.189823   33104 request.go:632] Waited for 157.37468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m03
	I0927 17:44:24.189892   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/etcd-ha-748477-m03
	I0927 17:44:24.189897   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.189904   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.189908   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.193136   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.390201   33104 request.go:632] Waited for 196.372402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:24.390286   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:24.390297   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.390308   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.390313   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.393762   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.394363   33104 pod_ready.go:93] pod "etcd-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.394381   33104 pod_ready.go:82] duration metric: took 361.981746ms for pod "etcd-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
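	The "Waited for ... due to client-side throttling" lines come from the Kubernetes client's request rate limiter; with QPS and Burst left at zero in the rest.Config above, client-go falls back to its documented defaults of 5 requests per second with a burst of 10, which is why back-to-back pod and node GETs queue for roughly 200 ms each. A minimal token-bucket sketch of that behaviour using golang.org/x/time/rate (the values are those defaults, not something configured in this run):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// client-go's fallback when rest.Config leaves QPS/Burst at zero:
	// 5 requests per second with a burst of 10.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	start := time.Now()
	for i := 0; i < 15; i++ {
		// Wait blocks until a token is available; once the burst is spent,
		// requests are spaced ~200 ms apart, matching the "Waited for 19x ms
		// due to client-side throttling" lines in the log.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}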
	I0927 17:44:24.394396   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.589922   33104 request.go:632] Waited for 195.447053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:44:24.589977   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477
	I0927 17:44:24.589984   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.589994   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.590003   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.595149   33104 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0927 17:44:24.790340   33104 request.go:632] Waited for 194.372172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.790393   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:24.790398   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.790405   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.790410   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.794157   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:24.794854   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:24.794872   33104 pod_ready.go:82] duration metric: took 400.469945ms for pod "kube-apiserver-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.794884   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:24.990005   33104 request.go:632] Waited for 195.038611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:44:24.990097   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m02
	I0927 17:44:24.990106   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:24.990114   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:24.990120   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:24.993651   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.189611   33104 request.go:632] Waited for 195.314442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:25.189675   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:25.189682   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.189692   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.189702   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.192900   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.193483   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.193499   33104 pod_ready.go:82] duration metric: took 398.608065ms for pod "kube-apiserver-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.193510   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.389697   33104 request.go:632] Waited for 196.11571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m03
	I0927 17:44:25.389767   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-748477-m03
	I0927 17:44:25.389774   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.389785   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.389793   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.393037   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.590215   33104 request.go:632] Waited for 196.404084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:25.590294   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:25.590304   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.590312   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.590316   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.593767   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.594384   33104 pod_ready.go:93] pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.594405   33104 pod_ready.go:82] duration metric: took 400.885974ms for pod "kube-apiserver-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.594417   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.789682   33104 request.go:632] Waited for 195.173744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:44:25.789750   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477
	I0927 17:44:25.789763   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.789771   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.789780   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.793195   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.990184   33104 request.go:632] Waited for 196.372393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:25.990247   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:25.990253   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:25.990260   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:25.990263   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:25.993519   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:25.994033   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:25.994056   33104 pod_ready.go:82] duration metric: took 399.631199ms for pod "kube-controller-manager-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:25.994070   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.190045   33104 request.go:632] Waited for 195.907906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:44:26.190131   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m02
	I0927 17:44:26.190138   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.190151   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.190160   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.193660   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.389361   33104 request.go:632] Waited for 195.017885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:26.389417   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:26.389421   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.389428   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.389431   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.392564   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.393105   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:26.393124   33104 pod_ready.go:82] duration metric: took 399.046825ms for pod "kube-controller-manager-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.393133   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.589483   33104 request.go:632] Waited for 196.270592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m03
	I0927 17:44:26.589536   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-748477-m03
	I0927 17:44:26.589540   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.589548   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.589552   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.592906   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.789895   33104 request.go:632] Waited for 196.382825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:26.789947   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:26.789952   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.789961   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.789964   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.793463   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:26.793873   33104 pod_ready.go:93] pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:26.793891   33104 pod_ready.go:82] duration metric: took 400.752393ms for pod "kube-controller-manager-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.793901   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:26.989945   33104 request.go:632] Waited for 195.982437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:44:26.990000   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kxwmh
	I0927 17:44:26.990005   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:26.990031   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:26.990035   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:26.993238   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.190379   33104 request.go:632] Waited for 196.39365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:27.190481   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:27.190488   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.190500   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.190506   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.194446   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.195047   33104 pod_ready.go:93] pod "kube-proxy-kxwmh" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.195067   33104 pod_ready.go:82] duration metric: took 401.160768ms for pod "kube-proxy-kxwmh" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.195076   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.390020   33104 request.go:632] Waited for 194.886629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:44:27.390100   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p76v9
	I0927 17:44:27.390108   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.390118   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.390144   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.393971   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.590100   33104 request.go:632] Waited for 195.421674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:27.590160   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:27.590166   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.590174   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.590180   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.593717   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.594167   33104 pod_ready.go:93] pod "kube-proxy-p76v9" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.594184   33104 pod_ready.go:82] duration metric: took 399.103012ms for pod "kube-proxy-p76v9" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.594193   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vwkqb" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.790210   33104 request.go:632] Waited for 195.943653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vwkqb
	I0927 17:44:27.790293   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vwkqb
	I0927 17:44:27.790300   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.790312   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.790320   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.793922   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.989848   33104 request.go:632] Waited for 194.791805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:27.989907   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:27.989914   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:27.989923   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:27.989939   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:27.993415   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:27.993925   33104 pod_ready.go:93] pod "kube-proxy-vwkqb" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:27.993944   33104 pod_ready.go:82] duration metric: took 399.743885ms for pod "kube-proxy-vwkqb" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:27.993955   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.190067   33104 request.go:632] Waited for 196.037102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:44:28.190120   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477
	I0927 17:44:28.190126   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.190133   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.190138   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.193549   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.389329   33104 request.go:632] Waited for 195.18973ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:28.389427   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477
	I0927 17:44:28.389436   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.389447   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.389459   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.392869   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.393523   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:28.393543   33104 pod_ready.go:82] duration metric: took 399.580493ms for pod "kube-scheduler-ha-748477" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.393553   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.589680   33104 request.go:632] Waited for 196.059502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:44:28.589758   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m02
	I0927 17:44:28.589766   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.589798   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.589812   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.593515   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.789392   33104 request.go:632] Waited for 195.298123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:28.789503   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m02
	I0927 17:44:28.789516   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.789528   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.789539   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.792681   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:28.793229   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:28.793254   33104 pod_ready.go:82] duration metric: took 399.693783ms for pod "kube-scheduler-ha-748477-m02" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.793277   33104 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:28.990199   33104 request.go:632] Waited for 196.858043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m03
	I0927 17:44:28.990266   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-748477-m03
	I0927 17:44:28.990272   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:28.990278   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:28.990283   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:28.993839   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.189981   33104 request.go:632] Waited for 195.403888ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:29.190077   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes/ha-748477-m03
	I0927 17:44:29.190088   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.190096   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.190103   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.193637   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.194214   33104 pod_ready.go:93] pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace has status "Ready":"True"
	I0927 17:44:29.194235   33104 pod_ready.go:82] duration metric: took 400.951036ms for pod "kube-scheduler-ha-748477-m03" in "kube-system" namespace to be "Ready" ...
	I0927 17:44:29.194250   33104 pod_ready.go:39] duration metric: took 5.200829097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 17:44:29.194265   33104 api_server.go:52] waiting for apiserver process to appear ...
	I0927 17:44:29.194320   33104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 17:44:29.209103   33104 api_server.go:72] duration metric: took 22.513227302s to wait for apiserver process to appear ...
	I0927 17:44:29.209147   33104 api_server.go:88] waiting for apiserver healthz status ...
	I0927 17:44:29.209171   33104 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0927 17:44:29.213508   33104 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0927 17:44:29.213572   33104 round_trippers.go:463] GET https://192.168.39.217:8443/version
	I0927 17:44:29.213579   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.213589   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.213599   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.214754   33104 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0927 17:44:29.214825   33104 api_server.go:141] control plane version: v1.31.1
	I0927 17:44:29.214842   33104 api_server.go:131] duration metric: took 5.68685ms to wait for apiserver health ...
	I0927 17:44:29.214854   33104 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 17:44:29.390318   33104 request.go:632] Waited for 175.371088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.390382   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.390388   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.390394   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.390400   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.396973   33104 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0927 17:44:29.403737   33104 system_pods.go:59] 24 kube-system pods found
	I0927 17:44:29.403771   33104 system_pods.go:61] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:44:29.403776   33104 system_pods.go:61] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:44:29.403780   33104 system_pods.go:61] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:44:29.403784   33104 system_pods.go:61] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:44:29.403787   33104 system_pods.go:61] "etcd-ha-748477-m03" [cd23c252-4153-4ed3-900a-ec3ec23a0b8a] Running
	I0927 17:44:29.403790   33104 system_pods.go:61] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:44:29.403794   33104 system_pods.go:61] "kindnet-66lb8" [613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba] Running
	I0927 17:44:29.403796   33104 system_pods.go:61] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:44:29.403800   33104 system_pods.go:61] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:44:29.403806   33104 system_pods.go:61] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:44:29.403810   33104 system_pods.go:61] "kube-apiserver-ha-748477-m03" [1ca56580-06a0-4c17-bfbf-fd04ca381250] Running
	I0927 17:44:29.403814   33104 system_pods.go:61] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:44:29.403818   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:44:29.403823   33104 system_pods.go:61] "kube-controller-manager-ha-748477-m03" [db69354c-c220-4f2a-b350-ce715009dfea] Running
	I0927 17:44:29.403827   33104 system_pods.go:61] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:44:29.403830   33104 system_pods.go:61] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:44:29.403833   33104 system_pods.go:61] "kube-proxy-vwkqb" [cee9a1cd-cce3-4e30-8bbe-1597f7ff4277] Running
	I0927 17:44:29.403836   33104 system_pods.go:61] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:44:29.403839   33104 system_pods.go:61] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:44:29.403841   33104 system_pods.go:61] "kube-scheduler-ha-748477-m03" [e9b04f8f-f820-455c-b70c-103a54bf9944] Running
	I0927 17:44:29.403845   33104 system_pods.go:61] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:44:29.403847   33104 system_pods.go:61] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:44:29.403851   33104 system_pods.go:61] "kube-vip-ha-748477-m03" [5f5c717e-5d86-4b0b-bd34-b4f8eb1f8eca] Running
	I0927 17:44:29.403853   33104 system_pods.go:61] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:44:29.403859   33104 system_pods.go:74] duration metric: took 188.99624ms to wait for pod list to return data ...
	I0927 17:44:29.403865   33104 default_sa.go:34] waiting for default service account to be created ...
	I0927 17:44:29.590098   33104 request.go:632] Waited for 186.16112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:44:29.590155   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/default/serviceaccounts
	I0927 17:44:29.590162   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.590171   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.590178   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.593809   33104 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0927 17:44:29.593933   33104 default_sa.go:45] found service account: "default"
	I0927 17:44:29.593953   33104 default_sa.go:55] duration metric: took 190.081669ms for default service account to be created ...
	I0927 17:44:29.593963   33104 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 17:44:29.790359   33104 request.go:632] Waited for 196.323191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.790417   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/namespaces/kube-system/pods
	I0927 17:44:29.790423   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.790430   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.790435   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.798546   33104 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0927 17:44:29.805235   33104 system_pods.go:86] 24 kube-system pods found
	I0927 17:44:29.805269   33104 system_pods.go:89] "coredns-7c65d6cfc9-n99lr" [ec2d5b00-2422-4e07-a352-a47254a81408] Running
	I0927 17:44:29.805277   33104 system_pods.go:89] "coredns-7c65d6cfc9-qvp2z" [61b875d4-dda7-465c-aff9-49e2eb8f5f9f] Running
	I0927 17:44:29.805283   33104 system_pods.go:89] "etcd-ha-748477" [5a3cd5ca-1fe0-45af-8ecb-ffe07554267f] Running
	I0927 17:44:29.805288   33104 system_pods.go:89] "etcd-ha-748477-m02" [98735bd7-e131-4183-90d0-fe9371351328] Running
	I0927 17:44:29.805293   33104 system_pods.go:89] "etcd-ha-748477-m03" [cd23c252-4153-4ed3-900a-ec3ec23a0b8a] Running
	I0927 17:44:29.805299   33104 system_pods.go:89] "kindnet-5wl4m" [fc7f8df5-02d8-4ad5-a8e8-127335b9d228] Running
	I0927 17:44:29.805304   33104 system_pods.go:89] "kindnet-66lb8" [613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba] Running
	I0927 17:44:29.805309   33104 system_pods.go:89] "kindnet-r9smp" [db4f8d38-452a-4db3-a9ac-e835aa9b6e74] Running
	I0927 17:44:29.805315   33104 system_pods.go:89] "kube-apiserver-ha-748477" [64d9bc75-0591-4f4f-9b3a-ae80f1c29758] Running
	I0927 17:44:29.805321   33104 system_pods.go:89] "kube-apiserver-ha-748477-m02" [f5bbd51c-d57a-4d88-9497-dfe96f7f32e8] Running
	I0927 17:44:29.805328   33104 system_pods.go:89] "kube-apiserver-ha-748477-m03" [1ca56580-06a0-4c17-bfbf-fd04ca381250] Running
	I0927 17:44:29.805337   33104 system_pods.go:89] "kube-controller-manager-ha-748477" [9e8a67a8-7d34-4863-a13b-090e2f76200f] Running
	I0927 17:44:29.805352   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m02" [c4652356-1abd-4a3c-8467-d0e4ce986de8] Running
	I0927 17:44:29.805358   33104 system_pods.go:89] "kube-controller-manager-ha-748477-m03" [db69354c-c220-4f2a-b350-ce715009dfea] Running
	I0927 17:44:29.805364   33104 system_pods.go:89] "kube-proxy-kxwmh" [ff85196c-19b2-41cc-a429-2f9a0d338e4f] Running
	I0927 17:44:29.805371   33104 system_pods.go:89] "kube-proxy-p76v9" [1ebfb1c9-64bb-47d1-962d-49573740e503] Running
	I0927 17:44:29.805379   33104 system_pods.go:89] "kube-proxy-vwkqb" [cee9a1cd-cce3-4e30-8bbe-1597f7ff4277] Running
	I0927 17:44:29.805386   33104 system_pods.go:89] "kube-scheduler-ha-748477" [4a15aad6-ad0a-4178-b4be-a8996e851be0] Running
	I0927 17:44:29.805394   33104 system_pods.go:89] "kube-scheduler-ha-748477-m02" [a5976eab-7801-48cb-a577-cf32978763da] Running
	I0927 17:44:29.805400   33104 system_pods.go:89] "kube-scheduler-ha-748477-m03" [e9b04f8f-f820-455c-b70c-103a54bf9944] Running
	I0927 17:44:29.805408   33104 system_pods.go:89] "kube-vip-ha-748477" [6851d789-cc8d-4ad0-8fe9-924d5d1d0ddf] Running
	I0927 17:44:29.805414   33104 system_pods.go:89] "kube-vip-ha-748477-m02" [562c181e-967c-4fe3-aa3b-11c478f38462] Running
	I0927 17:44:29.805421   33104 system_pods.go:89] "kube-vip-ha-748477-m03" [5f5c717e-5d86-4b0b-bd34-b4f8eb1f8eca] Running
	I0927 17:44:29.805427   33104 system_pods.go:89] "storage-provisioner" [8b5a708d-128c-492d-bff2-7efbfcc9396f] Running
	I0927 17:44:29.805437   33104 system_pods.go:126] duration metric: took 211.464032ms to wait for k8s-apps to be running ...
	I0927 17:44:29.805449   33104 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 17:44:29.805501   33104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 17:44:29.820712   33104 system_svc.go:56] duration metric: took 15.24207ms WaitForService to wait for kubelet
	I0927 17:44:29.820739   33104 kubeadm.go:582] duration metric: took 23.124868861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:44:29.820756   33104 node_conditions.go:102] verifying NodePressure condition ...
	I0927 17:44:29.990257   33104 request.go:632] Waited for 169.421001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.217:8443/api/v1/nodes
	I0927 17:44:29.990309   33104 round_trippers.go:463] GET https://192.168.39.217:8443/api/v1/nodes
	I0927 17:44:29.990315   33104 round_trippers.go:469] Request Headers:
	I0927 17:44:29.990322   33104 round_trippers.go:473]     Accept: application/json, */*
	I0927 17:44:29.990328   33104 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0927 17:44:29.994594   33104 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0927 17:44:29.995485   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995514   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995525   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995529   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995532   33104 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 17:44:29.995536   33104 node_conditions.go:123] node cpu capacity is 2
	I0927 17:44:29.995540   33104 node_conditions.go:105] duration metric: took 174.779797ms to run NodePressure ...
	I0927 17:44:29.995551   33104 start.go:241] waiting for startup goroutines ...
	I0927 17:44:29.995569   33104 start.go:255] writing updated cluster config ...
	I0927 17:44:29.995843   33104 ssh_runner.go:195] Run: rm -f paused
	I0927 17:44:30.046784   33104 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 17:44:30.049020   33104 out.go:177] * Done! kubectl is now configured to use "ha-748477" cluster and "default" namespace by default
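	[editor's note] The log above shows the readiness loop pattern: minikube polls pods in kube-system for the Ready condition (pod_ready.go), probes the apiserver's /healthz endpoint until it returns "ok", and only then reports the cluster as up. The sketch below is not minikube's implementation; it is a minimal, hedged client-go example of the same two checks. The kubeconfig path is a placeholder assumption.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path (assumption); minikube writes its own under MINIKUBE_HOME.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Same idea as the log's healthz probe: GET /healthz and expect the body "ok".
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.TODO()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// Same idea as the pod_ready waits: list kube-system pods and report their Ready condition.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%s Ready=%v\n", p.Name, ready)
		}
	}

	The "Waited for ... due to client-side throttling" lines in the log come from client-go's default rate limiter, which a real poller would tune or replace rather than reimplement.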
	
	
	==> CRI-O <==
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.911651265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459302911630816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bc6821b-f38c-4a1e-8e84-feca1f557c55 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.912247134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfbdec08-3adf-478a-8a9c-8b022513a65f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.912301902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfbdec08-3adf-478a-8a9c-8b022513a65f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.912847431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfbdec08-3adf-478a-8a9c-8b022513a65f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.954133735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32d045c2-da2a-4a8b-a2ff-84a973c6eaaf name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.954257413Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32d045c2-da2a-4a8b-a2ff-84a973c6eaaf name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.956312744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=008b4ee0-6ec8-40f5-b898-e54c1a7e568f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.956753529Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459302956728393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=008b4ee0-6ec8-40f5-b898-e54c1a7e568f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.957338287Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=fda7bab4-7071-4b16-b3e3-c70071e4681b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.957629950Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-j7gsn,Uid:07233d33-34ed-44e8-a9d5-376e1860ca0c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727459071385161427,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:44:31.057407872Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8b5a708d-128c-492d-bff2-7efbfcc9396f,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1727458932902449667,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T17:42:12.573218348Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-qvp2z,Uid:61b875d4-dda7-465c-aff9-49e2eb8f5f9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458932879699150,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:12.569958449Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-n99lr,Uid:ec2d5b00-2422-4e07-a352-a47254a81408,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1727458932878513965,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:12.563003994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&PodSandboxMetadata{Name:kindnet-5wl4m,Uid:fc7f8df5-02d8-4ad5-a8e8-127335b9d228,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458920706274387,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:00.387399998Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&PodSandboxMetadata{Name:kube-proxy-p76v9,Uid:1ebfb1c9-64bb-47d1-962d-49573740e503,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458920672097797,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T17:42:00.357582877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-748477,Uid:b14aea5a97dfd5a2488f6e3ced308879,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1727458909026459903,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: b14aea5a97dfd5a2488f6e3ced308879,kubernetes.io/config.seen: 2024-09-27T17:41:48.537214929Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-748477,Uid:647e1f1a223aa05c0d6b5b0aa1c461da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458909007338821,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 647e1f1a223aa05c0d6b5b0aa1c461da,kubernetes.io/config.seen: 2024-09-27T17:41:48.537216051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-748477,Uid:6ca1e1a0b5ef88fb0f62da990054eb17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458909006052534,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{kubernetes.io/config.hash: 6ca1e1a0b5ef88fb0f62da990054eb17,kubernetes.io/config.seen: 2024-09-27T17:41:48.537217513Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f25008a681435c386989bc22da79780f9d2c52dfc
2ee4bd1d34f0366069ed9fe,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-748477,Uid:e6983c6d4e8a67eea6f4983292eca43a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458909005424738,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6983c6d4e8a67eea6f4983292eca43a,kubernetes.io/config.seen: 2024-09-27T17:41:48.537216911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&PodSandboxMetadata{Name:etcd-ha-748477,Uid:3ec1f007f86453df35a2f3141bc489b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1727458908993962377,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-748477,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: 3ec1f007f86453df35a2f3141bc489b3,kubernetes.io/config.seen: 2024-09-27T17:41:48.537210945Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fda7bab4-7071-4b16-b3e3-c70071e4681b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.958412741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab493cae-6b0c-4f97-95cc-ec2f6c0cbcf8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.958485005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab493cae-6b0c-4f97-95cc-ec2f6c0cbcf8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.958709994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab493cae-6b0c-4f97-95cc-ec2f6c0cbcf8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.959803030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=618d2abc-2f3f-42cd-82ea-4c4af8ff0b69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.959860775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=618d2abc-2f3f-42cd-82ea-4c4af8ff0b69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:22 ha-748477 crio[659]: time="2024-09-27 17:48:22.960079070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=618d2abc-2f3f-42cd-82ea-4c4af8ff0b69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.004243739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89895ab5-e802-467b-b87c-7cdd29e76130 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.004335506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89895ab5-e802-467b-b87c-7cdd29e76130 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.005784746Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50867e72-f98f-4a40-9c55-41fb12c7ac4e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.006273738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459303006248627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50867e72-f98f-4a40-9c55-41fb12c7ac4e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.006874044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da667244-ee15-4e87-bc06-2f55d9366ba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.006939023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da667244-ee15-4e87-bc06-2f55d9366ba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.007281231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459075502145430,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933151942873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727458933154238912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741,PodSandboxId:37067721a35735982a71027b76c8551834799f9c528ace42a59e2efa446d876c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1727458933106647634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17274589
21106246229,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727458920839506273,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7,PodSandboxId:48cfa3bbc5e9d1dc45fa6aad5a4e690ef4035398d0b2b89664e3e5f6dd413057,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727458912072281618,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ca1e1a0b5ef88fb0f62da990054eb17,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727458909257214024,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727458909294741596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36,PodSandboxId:9ace3b28f636eb5f3f117319fa69a16b0f2be5f7cce95b3c419497e43b3d0ca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727458909221443950,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf,PodSandboxId:9ca07019cd0cfbde2be078c2096d4870d37a623b5f3cadedfe61e7413d2fa03c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727458909169292011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da667244-ee15-4e87-bc06-2f55d9366ba2 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.015420103Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=d8467d1b-8609-4fe8-8538-373c79822d79 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:48:23 ha-748477 crio[659]: time="2024-09-27 17:48:23.015508416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8467d1b-8609-4fe8-8538-373c79822d79 name=/runtime.v1.RuntimeService/Version
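
	The debug entries above are CRI-O answering routine CRI RuntimeService RPCs (Version, ImageFsInfo, ListContainers) issued by the kubelet over the socket named in the node annotations below (unix:///var/run/crio/crio.sock). To replay the same queries outside the kubelet, a minimal Go sketch against the published k8s.io/cri-api bindings might look like the following; the socket path and the 5s timeout are assumptions taken from this report, and root access on the node is assumed.

	// cri_list.go - sketch only: issue the same Version and ListContainers
	// RPCs that appear in the CRI-O debug log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed socket path; matches the cri-socket annotation in this report.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC as "/runtime.v1.RuntimeService/Version" above.
		ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

		// Same RPC as "/runtime.v1.RuntimeService/ListContainers"; an empty
		// filter returns the full container list, as the log itself notes.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}

	The truncated container IDs printed this way should line up with the first column of the container status table that follows.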
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82d138d00329a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   9af32827ca87e       busybox-7dff88458-j7gsn
	d07f02e11f879       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ce8d3fbc4ee43       coredns-7c65d6cfc9-qvp2z
	de0f399d2276a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   4c986f9d250c3       coredns-7c65d6cfc9-n99lr
	a7ccc536c4df9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   37067721a3573       storage-provisioner
	cd62df5a50cfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   61f84fe579fbd       kindnet-5wl4m
	42146256b0e01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   dc1e025d5f18b       kube-proxy-p76v9
	4caed5948aafe       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   48cfa3bbc5e9d       kube-vip-ha-748477
	d2acf98043067       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   f25008a681435       kube-scheduler-ha-748477
	72fe2a883c95c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   9199f6af07950       etcd-ha-748477
	c7ca45fc1dbb1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   9ace3b28f636e       kube-controller-manager-ha-748477
	657c5e75829c7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   9ca07019cd0cf       kube-apiserver-ha-748477
	
	
	==> coredns [d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777] <==
	[INFO] 10.244.0.4:55585 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166646s
	[INFO] 10.244.0.4:56311 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002436177s
	[INFO] 10.244.0.4:45590 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110873s
	[INFO] 10.244.2.2:43192 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152715s
	[INFO] 10.244.2.2:44388 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000177447s
	[INFO] 10.244.2.2:33554 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065853s
	[INFO] 10.244.2.2:58628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162914s
	[INFO] 10.244.1.2:38819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129715s
	[INFO] 10.244.1.2:60816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097737s
	[INFO] 10.244.1.2:36546 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014954s
	[INFO] 10.244.1.2:33829 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081077s
	[INFO] 10.244.1.2:59687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088947s
	[INFO] 10.244.0.4:40268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120362s
	[INFO] 10.244.0.4:38614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077477s
	[INFO] 10.244.0.4:40222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068679s
	[INFO] 10.244.2.2:51489 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133892s
	[INFO] 10.244.1.2:34773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000265454s
	[INFO] 10.244.0.4:56542 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227377s
	[INFO] 10.244.0.4:38585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133165s
	[INFO] 10.244.2.2:32823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133184s
	[INFO] 10.244.2.2:47801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112308s
	[INFO] 10.244.2.2:52586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146231s
	[INFO] 10.244.1.2:50376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194279s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116551s
	[INFO] 10.244.1.2:45074 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069954s
	
	
	==> coredns [de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa] <==
	[INFO] 10.244.2.2:47453 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000472755s
	[INFO] 10.244.1.2:51710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208951s
	[INFO] 10.244.1.2:47395 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000128476s
	[INFO] 10.244.1.2:39764 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001916816s
	[INFO] 10.244.0.4:60403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125998s
	[INFO] 10.244.0.4:36329 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177364s
	[INFO] 10.244.0.4:33684 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001089s
	[INFO] 10.244.2.2:47662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002007928s
	[INFO] 10.244.2.2:59058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158193s
	[INFO] 10.244.2.2:40790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001715411s
	[INFO] 10.244.2.2:48349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153048s
	[INFO] 10.244.1.2:55724 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002121618s
	[INFO] 10.244.1.2:41603 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096809s
	[INFO] 10.244.1.2:57083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631103s
	[INFO] 10.244.0.4:48117 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103399s
	[INFO] 10.244.2.2:56316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155752s
	[INFO] 10.244.2.2:36039 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172138s
	[INFO] 10.244.2.2:39197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113674s
	[INFO] 10.244.1.2:59834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130099s
	[INFO] 10.244.1.2:54472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087078s
	[INFO] 10.244.1.2:42463 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079936s
	[INFO] 10.244.0.4:58994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021944s
	[INFO] 10.244.0.4:50757 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135494s
	[INFO] 10.244.2.2:35416 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170114s
	[INFO] 10.244.1.2:50172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011348s
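
	The coredns logs above show pod clients on 10.244.x.x issuing forward (A/AAAA) and reverse (PTR) lookups; the recurring 10.0.96.10.in-addr.arpa and 1.0.96.10.in-addr.arpa entries are reverse queries for 10.96.0.10 (the cluster DNS ClusterIP) and 10.96.0.1 (the kubernetes Service). One of these lookups can be reproduced with the short sketch below; the DNS address 10.96.0.10 is an assumption inferred from those reverse queries rather than something printed by the test, and the program needs to run somewhere that can reach the service network (for example, inside a pod).

	// dns_check.go - sketch only: resolve the same in-cluster name that
	// appears in the coredns log, against the assumed cluster DNS IP.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Ignore the default resolver address and talk to cluster DNS directly.
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}

		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()

		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs) // a successful answer corresponds to the NOERROR lines above
	}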
	
	
	==> describe nodes <==
	Name:               ha-748477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:41:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:44:59 +0000   Fri, 27 Sep 2024 17:42:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-748477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492d2104e50247c88ce564105fa6e436
	  System UUID:                492d2104-e502-47c8-8ce5-64105fa6e436
	  Boot ID:                    e44f404a-867d-4f4e-a185-458196aac718
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j7gsn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-7c65d6cfc9-n99lr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-qvp2z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-748477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-5wl4m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-748477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-748477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-p76v9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-748477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-748477                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m22s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-748477 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-748477 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-748477 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-748477 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal  RegisteredNode           4m12s  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	
	
	Name:               ha-748477-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:42:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:45:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 17:44:52 +0000   Fri, 27 Sep 2024 17:46:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    ha-748477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a797c0b98fa454a9290261a4120ee96
	  System UUID:                1a797c0b-98fa-454a-9290-261a4120ee96
	  Boot ID:                    be8b9b76-5b30-449e-8e6a-b392c8bc637d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xmqtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-748477-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-r9smp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m33s
	  kube-system                 kube-apiserver-ha-748477-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-controller-manager-ha-748477-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-kxwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-scheduler-ha-748477-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-748477-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m33s (x8 over 5m34s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m33s (x8 over 5m34s)  kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m33s (x7 over 5m34s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-748477-m02 status is now: NodeNotReady
	
	
	Name:               ha-748477-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:44:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:45:04 +0000   Fri, 27 Sep 2024 17:44:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-748477-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f10cf0e49714a128d45f579afd701d8
	  System UUID:                7f10cf0e-4971-4a12-8d45-f579afd701d8
	  Boot ID:                    8028882c-9e9e-4142-9736-fa20678b0690
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8fcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-748477-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-66lb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m21s
	  kube-system                 kube-apiserver-ha-748477-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-748477-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-vwkqb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-ha-748477-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-748477-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node ha-748477-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	
	
	Name:               ha-748477-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_45_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:45:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:48:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:45:39 +0000   Fri, 27 Sep 2024 17:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-748477-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53bc6a6bc9f74a04882f5b53ace38c50
	  System UUID:                53bc6a6b-c9f7-4a04-882f-5b53ace38c50
	  Boot ID:                    797c4344-bca4-4508-93c8-92db2f3a4663
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8kdps       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m15s
	  kube-system                 kube-proxy-t92jl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m15s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m15s)  kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m15s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-748477-m04 status is now: NodeReady
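
	In the node descriptions above, ha-748477, ha-748477-m03 and ha-748477-m04 report Ready=True, while ha-748477-m02's conditions have gone Unknown and the node carries node.kubernetes.io/unreachable taints after its kubelet stopped posting status. The same readiness-and-taints summary can be pulled programmatically; the sketch below is a minimal client-go version, assuming the KUBECONFIG environment variable points at a kubeconfig for this cluster.

	// node_ready.go - sketch only: print each node's Ready condition and taint
	// count, the fields summarized in the "describe nodes" output above.
	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			ready := "Unknown"
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					ready = string(c.Status)
				}
			}
			fmt.Printf("%-16s Ready=%-8s taints=%d\n", n.Name, ready, len(n.Spec.Taints))
		}
	}

	For ha-748477-m02 this should mirror the Unknown Ready condition and the two unreachable taints shown above.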
	
	
	==> dmesg <==
	[Sep27 17:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050897] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038191] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.766886] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.994968] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.572771] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.496309] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.056667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051200] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.195115] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.125330] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279617] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.856213] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.390156] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.062929] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.000255] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.085204] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 17:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.205900] kauditd_printk_skb: 38 callbacks suppressed
	[ +42.959337] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771] <==
	{"level":"warn","ts":"2024-09-27T17:48:23.131296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.197418Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.269344Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.275451Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.279210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.293590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.298245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.303822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.312908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.318043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.322256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.328673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.337410Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.345852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.350241Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.354755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.363147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.369436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.375467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.379500Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.382947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.386917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.393919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.397788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:48:23.402933Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:48:23 up 7 min,  0 users,  load average: 0.24, 0.30, 0.17
	Linux ha-748477 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f] <==
	I0927 17:47:52.271927       1 main.go:299] handling current node
	I0927 17:48:02.265005       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:48:02.265095       1 main.go:299] handling current node
	I0927 17:48:02.265110       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:48:02.265116       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:48:02.265396       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:48:02.265422       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:48:02.265476       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:48:02.265494       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:48:12.271840       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:48:12.271870       1 main.go:299] handling current node
	I0927 17:48:12.271884       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:48:12.271888       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:48:12.272009       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:48:12.272015       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:48:12.272064       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:48:12.272069       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:48:22.270903       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:48:22.270941       1 main.go:299] handling current node
	I0927 17:48:22.270955       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:48:22.270961       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:48:22.271077       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:48:22.271098       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:48:22.271162       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:48:22.271226       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf] <==
	W0927 17:41:54.285503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
	I0927 17:41:54.286484       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 17:41:54.291279       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 17:41:54.388865       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 17:41:55.517839       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 17:41:55.539342       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 17:41:55.549868       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 17:41:59.140843       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 17:42:00.286046       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 17:44:36.903808       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44866: use of closed network connection
	E0927 17:44:37.083629       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44890: use of closed network connection
	E0927 17:44:37.325665       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44898: use of closed network connection
	E0927 17:44:37.513055       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44922: use of closed network connection
	E0927 17:44:37.702332       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44948: use of closed network connection
	E0927 17:44:37.883878       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44974: use of closed network connection
	E0927 17:44:38.055802       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44990: use of closed network connection
	E0927 17:44:38.236694       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45008: use of closed network connection
	E0927 17:44:38.403967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45026: use of closed network connection
	E0927 17:44:38.704686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45048: use of closed network connection
	E0927 17:44:38.877491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45076: use of closed network connection
	E0927 17:44:39.052837       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45094: use of closed network connection
	E0927 17:44:39.232482       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45114: use of closed network connection
	E0927 17:44:39.403972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45138: use of closed network connection
	E0927 17:44:39.594519       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:45158: use of closed network connection
	W0927 17:46:04.298556       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.225]
	
	
	==> kube-controller-manager [c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36] <==
	I0927 17:45:08.716652       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-748477-m04\" does not exist"
	I0927 17:45:08.760763       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-748477-m04" podCIDRs=["10.244.3.0/24"]
	I0927 17:45:08.760823       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:08.760843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.011937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.385318       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:09.574027       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-748477-m04"
	I0927 17:45:09.640869       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:11.430286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:11.479780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:12.942848       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:12.962049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:18.969210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:29.722225       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:45:29.722369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:29.743285       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:31.451751       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:45:39.404025       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:46:24.602364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:46:24.602509       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:24.628682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:24.710382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.746809ms"
	I0927 17:46:24.710519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.102µs"
	I0927 17:46:26.579533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:46:29.873026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	
	
	==> kube-proxy [42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 17:42:01.081502       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 17:42:01.110880       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	E0927 17:42:01.111017       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:42:01.147630       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:42:01.147672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:42:01.147695       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:42:01.150196       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:42:01.150782       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:42:01.150809       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:42:01.154388       1 config.go:199] "Starting service config controller"
	I0927 17:42:01.154878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:42:01.155097       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:42:01.155116       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:42:01.157808       1 config.go:328] "Starting node config controller"
	I0927 17:42:01.157840       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 17:42:01.256235       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:42:01.256497       1 shared_informer.go:320] Caches are synced for service config
	I0927 17:42:01.258142       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756] <==
	E0927 17:44:02.933717       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-66lb8\": pod kindnet-66lb8 is already assigned to node \"ha-748477-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-66lb8" node="ha-748477-m03"
	E0927 17:44:02.934559       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 613bc6b2-b044-4e7a-a3be-8f1b9fa9c3ba(kube-system/kindnet-66lb8) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-66lb8"
	E0927 17:44:02.935616       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-66lb8\": pod kindnet-66lb8 is already assigned to node \"ha-748477-m03\"" pod="kube-system/kindnet-66lb8"
	I0927 17:44:02.935846       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-66lb8" node="ha-748477-m03"
	E0927 17:44:02.934408       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-vwkqb\": pod kube-proxy-vwkqb is already assigned to node \"ha-748477-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-vwkqb" node="ha-748477-m03"
	E0927 17:44:02.938352       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod cee9a1cd-cce3-4e30-8bbe-1597f7ff4277(kube-system/kube-proxy-vwkqb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-vwkqb"
	E0927 17:44:02.938437       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-vwkqb\": pod kube-proxy-vwkqb is already assigned to node \"ha-748477-m03\"" pod="kube-system/kube-proxy-vwkqb"
	I0927 17:44:02.938478       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-vwkqb" node="ha-748477-m03"
	E0927 17:44:31.066581       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j7gsn\": pod busybox-7dff88458-j7gsn is already assigned to node \"ha-748477\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-j7gsn" node="ha-748477"
	E0927 17:44:31.066642       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07233d33-34ed-44e8-a9d5-376e1860ca0c(default/busybox-7dff88458-j7gsn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-j7gsn"
	E0927 17:44:31.066658       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j7gsn\": pod busybox-7dff88458-j7gsn is already assigned to node \"ha-748477\"" pod="default/busybox-7dff88458-j7gsn"
	I0927 17:44:31.066676       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-j7gsn" node="ha-748477"
	E0927 17:44:31.089611       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xmqtg\": pod busybox-7dff88458-xmqtg is already assigned to node \"ha-748477-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xmqtg" node="ha-748477-m02"
	E0927 17:44:31.092159       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bd416f42-71bf-42f9-8e17-921e5b35333b(default/busybox-7dff88458-xmqtg) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-xmqtg"
	E0927 17:44:31.092486       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xmqtg\": pod busybox-7dff88458-xmqtg is already assigned to node \"ha-748477-m02\"" pod="default/busybox-7dff88458-xmqtg"
	I0927 17:44:31.092797       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xmqtg" node="ha-748477-m02"
	E0927 17:44:31.312466       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-tpc4p\" not found" pod="default/busybox-7dff88458-tpc4p"
	E0927 17:45:08.782464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.782636       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8041369a-60b6-46ac-ae40-2a232d799caf(kube-system/kindnet-gls7h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gls7h"
	E0927 17:45:08.782676       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" pod="kube-system/kindnet-gls7h"
	I0927 17:45:08.782749       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.783276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:45:08.785675       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fc28a65-d0e3-476e-bc9e-ff4e9f2e85ac(kube-system/kube-proxy-z2tnx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z2tnx"
	E0927 17:45:08.785786       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" pod="kube-system/kube-proxy-z2tnx"
	I0927 17:45:08.785868       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	
	
	==> kubelet <==
	Sep 27 17:46:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:46:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:46:55 ha-748477 kubelet[1304]: E0927 17:46:55.552924    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459215552461142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:46:55 ha-748477 kubelet[1304]: E0927 17:46:55.552961    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459215552461142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:05 ha-748477 kubelet[1304]: E0927 17:47:05.554669    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459225554270054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:05 ha-748477 kubelet[1304]: E0927 17:47:05.555306    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459225554270054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:15 ha-748477 kubelet[1304]: E0927 17:47:15.557097    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459235556635818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:15 ha-748477 kubelet[1304]: E0927 17:47:15.557135    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459235556635818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:25 ha-748477 kubelet[1304]: E0927 17:47:25.559322    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459245558659945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:25 ha-748477 kubelet[1304]: E0927 17:47:25.559377    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459245558659945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:35 ha-748477 kubelet[1304]: E0927 17:47:35.561127    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459255560855912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:35 ha-748477 kubelet[1304]: E0927 17:47:35.561197    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459255560855912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:45 ha-748477 kubelet[1304]: E0927 17:47:45.563216    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459265562750178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:45 ha-748477 kubelet[1304]: E0927 17:47:45.567283    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459265562750178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.507545    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:47:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:47:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.568682    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459275568451294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:47:55 ha-748477 kubelet[1304]: E0927 17:47:55.568704    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459275568451294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:05 ha-748477 kubelet[1304]: E0927 17:48:05.570034    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459285569687152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:05 ha-748477 kubelet[1304]: E0927 17:48:05.570079    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459285569687152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:15 ha-748477 kubelet[1304]: E0927 17:48:15.571710    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459295571258556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:48:15 ha-748477 kubelet[1304]: E0927 17:48:15.572095    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459295571258556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-748477 -n ha-748477
helpers_test.go:261: (dbg) Run:  kubectl --context ha-748477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-748477 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-748477 -v=7 --alsologtostderr
E0927 17:50:16.996970   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-748477 -v=7 --alsologtostderr: exit status 82 (2m1.877108613s)

                                                
                                                
-- stdout --
	* Stopping node "ha-748477-m04"  ...
	* Stopping node "ha-748477-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 17:48:28.544657   38242 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:48:28.544940   38242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:48:28.544950   38242 out.go:358] Setting ErrFile to fd 2...
	I0927 17:48:28.544954   38242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:48:28.545122   38242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:48:28.545332   38242 out.go:352] Setting JSON to false
	I0927 17:48:28.545416   38242 mustload.go:65] Loading cluster: ha-748477
	I0927 17:48:28.545811   38242 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:48:28.545894   38242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:48:28.546059   38242 mustload.go:65] Loading cluster: ha-748477
	I0927 17:48:28.546191   38242 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:48:28.546223   38242 stop.go:39] StopHost: ha-748477-m04
	I0927 17:48:28.546565   38242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:48:28.546612   38242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:48:28.562328   38242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41353
	I0927 17:48:28.562958   38242 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:48:28.563620   38242 main.go:141] libmachine: Using API Version  1
	I0927 17:48:28.563673   38242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:48:28.564000   38242 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:48:28.566506   38242 out.go:177] * Stopping node "ha-748477-m04"  ...
	I0927 17:48:28.568655   38242 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 17:48:28.568707   38242 main.go:141] libmachine: (ha-748477-m04) Calling .DriverName
	I0927 17:48:28.568933   38242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 17:48:28.568956   38242 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHHostname
	I0927 17:48:28.571734   38242 main.go:141] libmachine: (ha-748477-m04) DBG | domain ha-748477-m04 has defined MAC address 52:54:00:b6:6c:3f in network mk-ha-748477
	I0927 17:48:28.572161   38242 main.go:141] libmachine: (ha-748477-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:6c:3f", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:44:56 +0000 UTC Type:0 Mac:52:54:00:b6:6c:3f Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-748477-m04 Clientid:01:52:54:00:b6:6c:3f}
	I0927 17:48:28.572189   38242 main.go:141] libmachine: (ha-748477-m04) DBG | domain ha-748477-m04 has defined IP address 192.168.39.37 and MAC address 52:54:00:b6:6c:3f in network mk-ha-748477
	I0927 17:48:28.572351   38242 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHPort
	I0927 17:48:28.572523   38242 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHKeyPath
	I0927 17:48:28.572721   38242 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHUsername
	I0927 17:48:28.572911   38242 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m04/id_rsa Username:docker}
	I0927 17:48:28.660362   38242 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 17:48:28.714389   38242 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 17:48:28.770343   38242 main.go:141] libmachine: Stopping "ha-748477-m04"...
	I0927 17:48:28.770372   38242 main.go:141] libmachine: (ha-748477-m04) Calling .GetState
	I0927 17:48:28.772074   38242 main.go:141] libmachine: (ha-748477-m04) Calling .Stop
	I0927 17:48:28.776280   38242 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 0/120
	I0927 17:48:29.928599   38242 main.go:141] libmachine: (ha-748477-m04) Calling .GetState
	I0927 17:48:29.930021   38242 main.go:141] libmachine: Machine "ha-748477-m04" was stopped.
	I0927 17:48:29.930039   38242 stop.go:75] duration metric: took 1.361390265s to stop
	I0927 17:48:29.930082   38242 stop.go:39] StopHost: ha-748477-m03
	I0927 17:48:29.930385   38242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:48:29.930422   38242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:48:29.944999   38242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0927 17:48:29.945541   38242 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:48:29.946061   38242 main.go:141] libmachine: Using API Version  1
	I0927 17:48:29.946076   38242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:48:29.946417   38242 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:48:29.949335   38242 out.go:177] * Stopping node "ha-748477-m03"  ...
	I0927 17:48:29.950667   38242 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 17:48:29.950698   38242 main.go:141] libmachine: (ha-748477-m03) Calling .DriverName
	I0927 17:48:29.950943   38242 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 17:48:29.950969   38242 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHHostname
	I0927 17:48:29.954258   38242 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:48:29.954898   38242 main.go:141] libmachine: (ha-748477-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:59:33", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:43:29 +0000 UTC Type:0 Mac:52:54:00:bf:59:33 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-748477-m03 Clientid:01:52:54:00:bf:59:33}
	I0927 17:48:29.954947   38242 main.go:141] libmachine: (ha-748477-m03) DBG | domain ha-748477-m03 has defined IP address 192.168.39.225 and MAC address 52:54:00:bf:59:33 in network mk-ha-748477
	I0927 17:48:29.955136   38242 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHPort
	I0927 17:48:29.955308   38242 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHKeyPath
	I0927 17:48:29.955469   38242 main.go:141] libmachine: (ha-748477-m03) Calling .GetSSHUsername
	I0927 17:48:29.955571   38242 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m03/id_rsa Username:docker}
	I0927 17:48:30.048277   38242 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 17:48:30.104703   38242 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 17:48:30.161032   38242 main.go:141] libmachine: Stopping "ha-748477-m03"...
	I0927 17:48:30.161060   38242 main.go:141] libmachine: (ha-748477-m03) Calling .GetState
	I0927 17:48:30.162815   38242 main.go:141] libmachine: (ha-748477-m03) Calling .Stop
	I0927 17:48:30.166473   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 0/120
	I0927 17:48:31.167910   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 1/120
	I0927 17:48:32.169457   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 2/120
	I0927 17:48:33.171139   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 3/120
	I0927 17:48:34.173026   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 4/120
	I0927 17:48:35.175109   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 5/120
	I0927 17:48:36.176686   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 6/120
	I0927 17:48:37.178716   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 7/120
	I0927 17:48:38.180398   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 8/120
	I0927 17:48:39.182080   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 9/120
	I0927 17:48:40.184241   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 10/120
	I0927 17:48:41.186121   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 11/120
	I0927 17:48:42.187802   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 12/120
	I0927 17:48:43.189537   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 13/120
	I0927 17:48:44.191206   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 14/120
	I0927 17:48:45.193114   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 15/120
	I0927 17:48:46.194936   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 16/120
	I0927 17:48:47.196291   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 17/120
	I0927 17:48:48.197812   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 18/120
	I0927 17:48:49.199543   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 19/120
	I0927 17:48:50.201585   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 20/120
	I0927 17:48:51.203228   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 21/120
	I0927 17:48:52.204614   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 22/120
	I0927 17:48:53.206556   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 23/120
	I0927 17:48:54.207938   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 24/120
	I0927 17:48:55.210176   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 25/120
	I0927 17:48:56.211943   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 26/120
	I0927 17:48:57.213526   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 27/120
	I0927 17:48:58.215096   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 28/120
	I0927 17:48:59.216783   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 29/120
	I0927 17:49:00.218658   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 30/120
	I0927 17:49:01.220425   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 31/120
	I0927 17:49:02.222365   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 32/120
	I0927 17:49:03.223777   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 33/120
	I0927 17:49:04.225201   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 34/120
	I0927 17:49:05.226884   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 35/120
	I0927 17:49:06.228370   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 36/120
	I0927 17:49:07.229881   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 37/120
	I0927 17:49:08.231200   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 38/120
	I0927 17:49:09.233001   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 39/120
	I0927 17:49:10.234921   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 40/120
	I0927 17:49:11.236274   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 41/120
	I0927 17:49:12.238044   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 42/120
	I0927 17:49:13.239549   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 43/120
	I0927 17:49:14.241177   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 44/120
	I0927 17:49:15.243451   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 45/120
	I0927 17:49:16.245338   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 46/120
	I0927 17:49:17.247201   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 47/120
	I0927 17:49:18.249412   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 48/120
	I0927 17:49:19.251080   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 49/120
	I0927 17:49:20.253078   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 50/120
	I0927 17:49:21.254613   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 51/120
	I0927 17:49:22.256016   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 52/120
	I0927 17:49:23.257753   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 53/120
	I0927 17:49:24.259333   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 54/120
	I0927 17:49:25.261748   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 55/120
	I0927 17:49:26.263172   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 56/120
	I0927 17:49:27.265089   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 57/120
	I0927 17:49:28.266725   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 58/120
	I0927 17:49:29.268256   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 59/120
	I0927 17:49:30.270053   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 60/120
	I0927 17:49:31.271673   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 61/120
	I0927 17:49:32.273342   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 62/120
	I0927 17:49:33.275011   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 63/120
	I0927 17:49:34.276618   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 64/120
	I0927 17:49:35.278568   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 65/120
	I0927 17:49:36.280466   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 66/120
	I0927 17:49:37.282220   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 67/120
	I0927 17:49:38.284249   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 68/120
	I0927 17:49:39.285703   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 69/120
	I0927 17:49:40.287643   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 70/120
	I0927 17:49:41.289486   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 71/120
	I0927 17:49:42.291229   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 72/120
	I0927 17:49:43.292852   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 73/120
	I0927 17:49:44.294291   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 74/120
	I0927 17:49:45.296428   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 75/120
	I0927 17:49:46.297900   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 76/120
	I0927 17:49:47.299533   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 77/120
	I0927 17:49:48.301132   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 78/120
	I0927 17:49:49.302878   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 79/120
	I0927 17:49:50.304863   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 80/120
	I0927 17:49:51.306546   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 81/120
	I0927 17:49:52.308298   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 82/120
	I0927 17:49:53.309931   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 83/120
	I0927 17:49:54.311370   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 84/120
	I0927 17:49:55.313228   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 85/120
	I0927 17:49:56.314887   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 86/120
	I0927 17:49:57.316509   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 87/120
	I0927 17:49:58.317948   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 88/120
	I0927 17:49:59.319808   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 89/120
	I0927 17:50:00.321541   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 90/120
	I0927 17:50:01.323266   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 91/120
	I0927 17:50:02.324593   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 92/120
	I0927 17:50:03.325986   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 93/120
	I0927 17:50:04.327452   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 94/120
	I0927 17:50:05.329441   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 95/120
	I0927 17:50:06.331446   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 96/120
	I0927 17:50:07.333311   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 97/120
	I0927 17:50:08.335080   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 98/120
	I0927 17:50:09.337437   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 99/120
	I0927 17:50:10.338961   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 100/120
	I0927 17:50:11.341347   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 101/120
	I0927 17:50:12.342690   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 102/120
	I0927 17:50:13.344372   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 103/120
	I0927 17:50:14.345878   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 104/120
	I0927 17:50:15.347776   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 105/120
	I0927 17:50:16.348934   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 106/120
	I0927 17:50:17.350459   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 107/120
	I0927 17:50:18.351996   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 108/120
	I0927 17:50:19.353608   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 109/120
	I0927 17:50:20.355229   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 110/120
	I0927 17:50:21.357038   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 111/120
	I0927 17:50:22.358636   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 112/120
	I0927 17:50:23.360039   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 113/120
	I0927 17:50:24.361530   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 114/120
	I0927 17:50:25.363560   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 115/120
	I0927 17:50:26.365094   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 116/120
	I0927 17:50:27.366741   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 117/120
	I0927 17:50:28.368159   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 118/120
	I0927 17:50:29.369802   38242 main.go:141] libmachine: (ha-748477-m03) Waiting for machine to stop 119/120
	I0927 17:50:30.371313   38242 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 17:50:30.371396   38242 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 17:50:30.373449   38242 out.go:201] 
	W0927 17:50:30.375071   38242 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 17:50:30.375093   38242 out.go:270] * 
	* 
	W0927 17:50:30.377432   38242 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 17:50:30.378848   38242 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-748477 -v=7 --alsologtostderr" : exit status 82
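The repeated "Waiting for machine to stop N/120" lines above show the driver polling the VM state roughly once per second for 120 attempts before giving up and reporting GUEST_STOP_TIMEOUT. A minimal Go sketch of that kind of bounded stop-wait loop, purely for illustration (the helper below is modeled on the log output, not copied from minikube's source):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState roughly once per second, up to maxAttempts
// times, and fails if the machine is still "Running" afterwards. The name,
// signature, and error text are illustrative only.
func waitForStop(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		state, err := getState()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never leaves the Running state; the failing test
	// above used 120 attempts, which is why the stop ran for about two minutes.
	err := waitForStop(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("stop err:", err)
}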
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-748477 --wait=true -v=7 --alsologtostderr
E0927 17:50:44.702569   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-748477 --wait=true -v=7 --alsologtostderr: (4m28.141563428s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-748477
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-748477 -n ha-748477
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 logs -n 25: (1.798429431s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m04 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp testdata/cp-test.txt                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m04_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03:/home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m03 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-748477 node stop m02 -v=7                                                     | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-748477 node start m02 -v=7                                                    | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-748477 -v=7                                                           | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-748477 -v=7                                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-748477 --wait=true -v=7                                                    | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:50 UTC | 27 Sep 24 17:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-748477                                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:54 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:50:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
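Every entry below follows the klog header layout documented in the line above ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). As an aside, a small self-contained Go sketch (not part of minikube or klog) that splits one of these lines into its fields:

package main

import (
	"fmt"
	"regexp"
)

// klogLine captures: level, mmdd, hh:mm:ss.uuuuuu, threadid, file, line, msg.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	// Sample taken verbatim from the first entry below.
	sample := "I0927 17:50:30.424385   38757 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(sample)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("level=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}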
	I0927 17:50:30.424385   38757 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:50:30.424514   38757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:50:30.424523   38757 out.go:358] Setting ErrFile to fd 2...
	I0927 17:50:30.424527   38757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:50:30.425271   38757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:50:30.426831   38757 out.go:352] Setting JSON to false
	I0927 17:50:30.428150   38757 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5575,"bootTime":1727453855,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:50:30.428295   38757 start.go:139] virtualization: kvm guest
	I0927 17:50:30.430588   38757 out.go:177] * [ha-748477] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:50:30.432316   38757 notify.go:220] Checking for updates...
	I0927 17:50:30.432344   38757 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:50:30.434073   38757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:50:30.435876   38757 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:50:30.437587   38757 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:50:30.439384   38757 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:50:30.441049   38757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:50:30.443422   38757 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:50:30.443558   38757 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:50:30.444318   38757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:50:30.444365   38757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:50:30.460317   38757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0927 17:50:30.460923   38757 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:50:30.461624   38757 main.go:141] libmachine: Using API Version  1
	I0927 17:50:30.461658   38757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:50:30.462039   38757 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:50:30.462301   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:50:30.502146   38757 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 17:50:30.503553   38757 start.go:297] selected driver: kvm2
	I0927 17:50:30.503568   38757 start.go:901] validating driver "kvm2" against &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:50:30.503781   38757 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:50:30.504226   38757 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:50:30.504312   38757 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 17:50:30.520160   38757 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 17:50:30.520901   38757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:50:30.520937   38757 cni.go:84] Creating CNI manager for ""
	I0927 17:50:30.520989   38757 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 17:50:30.521054   38757 start.go:340] cluster config:
	{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:50:30.521197   38757 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:50:30.523468   38757 out.go:177] * Starting "ha-748477" primary control-plane node in "ha-748477" cluster
	I0927 17:50:30.524672   38757 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:50:30.524732   38757 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 17:50:30.524743   38757 cache.go:56] Caching tarball of preloaded images
	I0927 17:50:30.524850   38757 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:50:30.524863   38757 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:50:30.524985   38757 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
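The profile written here is plain JSON, and its keys match the cluster config dump printed just above (Name, Driver, KubernetesConfig, Nodes, ...). A hedged sketch of reading a small subset of it back, with the struct trimmed to a few illustrative fields (not the full minikube schema):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Node and ClusterConfig mirror only a handful of the fields visible in the
// config dump above; everything else is omitted for brevity.
type Node struct {
	Name              string
	IP                string
	Port              int
	KubernetesVersion string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
	Nodes []Node
}

func main() {
	// Path taken from the log line above.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json")
	if err != nil {
		fmt.Println("read config:", err)
		return
	}
	var cc ClusterConfig
	if err := json.Unmarshal(data, &cc); err != nil {
		fmt.Println("decode config:", err)
		return
	}
	for _, n := range cc.Nodes {
		fmt.Printf("%s %s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
	}
}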
	I0927 17:50:30.525190   38757 start.go:360] acquireMachinesLock for ha-748477: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:50:30.525232   38757 start.go:364] duration metric: took 23.245µs to acquireMachinesLock for "ha-748477"
	I0927 17:50:30.525273   38757 start.go:96] Skipping create...Using existing machine configuration
	I0927 17:50:30.525280   38757 fix.go:54] fixHost starting: 
	I0927 17:50:30.525533   38757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:50:30.525565   38757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:50:30.540401   38757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0927 17:50:30.540876   38757 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:50:30.541417   38757 main.go:141] libmachine: Using API Version  1
	I0927 17:50:30.541439   38757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:50:30.541816   38757 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:50:30.542007   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:50:30.542167   38757 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:50:30.544015   38757 fix.go:112] recreateIfNeeded on ha-748477: state=Running err=<nil>
	W0927 17:50:30.544047   38757 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 17:50:30.546174   38757 out.go:177] * Updating the running kvm2 "ha-748477" VM ...
	I0927 17:50:30.547622   38757 machine.go:93] provisionDockerMachine start ...
	I0927 17:50:30.547649   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:50:30.547909   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.550639   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.551156   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.551186   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.551332   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:30.551510   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.551672   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.551789   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:30.551946   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:30.552213   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:30.552226   38757 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 17:50:30.661058   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:50:30.661099   38757 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:50:30.661376   38757 buildroot.go:166] provisioning hostname "ha-748477"
	I0927 17:50:30.661401   38757 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:50:30.661591   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.664371   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.664860   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.664894   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.665117   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:30.665315   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.665502   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.665651   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:30.665840   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:30.666005   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:30.666020   38757 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477 && echo "ha-748477" | sudo tee /etc/hostname
	I0927 17:50:30.783842   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:50:30.783872   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.786699   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.787092   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.787122   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.787372   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:30.787572   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.787763   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.787886   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:30.788044   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:30.788237   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:30.788259   38757 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:50:30.896561   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 17:50:30.896591   38757 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:50:30.896615   38757 buildroot.go:174] setting up certificates
	I0927 17:50:30.896626   38757 provision.go:84] configureAuth start
	I0927 17:50:30.896634   38757 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:50:30.897036   38757 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:50:30.902127   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.902758   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.902782   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.903088   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.907150   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.907576   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.907601   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.908171   38757 provision.go:143] copyHostCerts
	I0927 17:50:30.908222   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:50:30.908256   38757 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:50:30.908275   38757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:50:30.908348   38757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:50:30.908484   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:50:30.908527   38757 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:50:30.908538   38757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:50:30.908585   38757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:50:30.908655   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:50:30.908672   38757 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:50:30.908678   38757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:50:30.908701   38757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:50:30.908778   38757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477 san=[127.0.0.1 192.168.39.217 ha-748477 localhost minikube]
	I0927 17:50:30.996703   38757 provision.go:177] copyRemoteCerts
	I0927 17:50:30.996774   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:50:30.996797   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.999703   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.000154   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:31.000186   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.000318   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:31.000502   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:31.000714   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:31.000921   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:50:31.081756   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:50:31.081829   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:50:31.108902   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:50:31.108996   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0927 17:50:31.135949   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:50:31.136043   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:50:31.162580   38757 provision.go:87] duration metric: took 265.939805ms to configureAuth
	I0927 17:50:31.162614   38757 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:50:31.162871   38757 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:50:31.162957   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:31.165683   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.166101   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:31.166143   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.166345   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:31.166557   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:31.166693   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:31.166826   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:31.167003   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:31.167172   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:31.167186   38757 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:52:02.027936   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:52:02.027984   38757 machine.go:96] duration metric: took 1m31.480344538s to provisionDockerMachine
	I0927 17:52:02.028004   38757 start.go:293] postStartSetup for "ha-748477" (driver="kvm2")
	I0927 17:52:02.028025   38757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:52:02.028054   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.028518   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:52:02.028557   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.031876   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.032328   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.032358   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.032553   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.032736   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.032888   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.033041   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:52:02.114186   38757 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:52:02.118480   38757 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:52:02.118519   38757 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:52:02.118592   38757 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:52:02.118700   38757 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:52:02.118714   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:52:02.118813   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:52:02.127965   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:52:02.153036   38757 start.go:296] duration metric: took 125.017384ms for postStartSetup
	I0927 17:52:02.153081   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.153424   38757 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0927 17:52:02.153453   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.156361   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.156926   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.156959   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.157179   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.157388   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.157730   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.157934   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	W0927 17:52:02.237106   38757 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0927 17:52:02.237146   38757 fix.go:56] duration metric: took 1m31.711865222s for fixHost
	I0927 17:52:02.237182   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.240043   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.240421   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.240447   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.240637   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.240852   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.241064   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.241228   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.241412   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:52:02.241580   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:52:02.241589   38757 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:52:02.339282   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727459522.305058331
	
	I0927 17:52:02.339315   38757 fix.go:216] guest clock: 1727459522.305058331
	I0927 17:52:02.339324   38757 fix.go:229] Guest: 2024-09-27 17:52:02.305058331 +0000 UTC Remote: 2024-09-27 17:52:02.237163091 +0000 UTC m=+91.848711143 (delta=67.89524ms)
	I0927 17:52:02.339381   38757 fix.go:200] guest clock delta is within tolerance: 67.89524ms
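The clock check above runs `date +%s.%N` on the guest and compares the result against the host-side reference time, reporting the roughly 68ms delta as within tolerance. A minimal Go sketch of that comparison (the parsing helper and the tolerance value are assumptions for illustration, not minikube's actual fix.go code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds since
// the Unix epoch) into a time.Time without losing sub-second precision.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest timestamp copied from the log line above.
	guest, err := parseGuestClock("1727459522.305058331")
	if err != nil {
		fmt.Println("parse guest clock:", err)
		return
	}
	host := time.Now() // stand-in for the host-side reference time in the log
	delta := guest.Sub(host)
	const tolerance = time.Second // illustrative threshold only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
}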
	I0927 17:52:02.339389   38757 start.go:83] releasing machines lock for "ha-748477", held for 1m31.814120266s
	I0927 17:52:02.339419   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.339685   38757 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:52:02.342515   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.342976   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.343013   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.343049   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.343661   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.343886   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.344001   38757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:52:02.344031   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.344089   38757 ssh_runner.go:195] Run: cat /version.json
	I0927 17:52:02.344112   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.346710   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347057   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347106   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.347131   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347266   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.347468   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.347614   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.347661   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.347683   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347775   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:52:02.347883   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.348055   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.348222   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.348354   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:52:02.463983   38757 ssh_runner.go:195] Run: systemctl --version
	I0927 17:52:02.470446   38757 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:52:02.632072   38757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:52:02.640064   38757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:52:02.640131   38757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:52:02.650297   38757 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 17:52:02.650321   38757 start.go:495] detecting cgroup driver to use...
	I0927 17:52:02.650387   38757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:52:02.667376   38757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:52:02.681617   38757 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:52:02.681684   38757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:52:02.695342   38757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:52:02.709156   38757 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:52:02.862957   38757 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:52:03.007205   38757 docker.go:233] disabling docker service ...
	I0927 17:52:03.007277   38757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:52:03.024936   38757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:52:03.038538   38757 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:52:03.188594   38757 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:52:03.339738   38757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:52:03.354004   38757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:52:03.373390   38757 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:52:03.373457   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.384341   38757 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:52:03.384421   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.395736   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.406771   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.417229   38757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:52:03.428906   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.441279   38757 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.452936   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.464225   38757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:52:03.474300   38757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:52:03.484185   38757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:52:03.635522   38757 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:52:03.893259   38757 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:52:03.893343   38757 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:52:03.898666   38757 start.go:563] Will wait 60s for crictl version
	I0927 17:52:03.898727   38757 ssh_runner.go:195] Run: which crictl
	I0927 17:52:03.902533   38757 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:52:03.939900   38757 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 17:52:03.939996   38757 ssh_runner.go:195] Run: crio --version
	I0927 17:52:03.969560   38757 ssh_runner.go:195] Run: crio --version
	I0927 17:52:04.002061   38757 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:52:04.003292   38757 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:52:04.005988   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:04.006474   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:04.006504   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:04.006716   38757 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:52:04.011889   38757 kubeadm.go:883] updating cluster {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 17:52:04.012055   38757 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:52:04.012107   38757 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:52:04.056973   38757 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:52:04.056996   38757 crio.go:433] Images already preloaded, skipping extraction
	I0927 17:52:04.057042   38757 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:52:04.092033   38757 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:52:04.092064   38757 cache_images.go:84] Images are preloaded, skipping loading
	I0927 17:52:04.092076   38757 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.1 crio true true} ...
	I0927 17:52:04.092229   38757 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:52:04.092322   38757 ssh_runner.go:195] Run: crio config
	I0927 17:52:04.145573   38757 cni.go:84] Creating CNI manager for ""
	I0927 17:52:04.145603   38757 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 17:52:04.145612   38757 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 17:52:04.145638   38757 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-748477 NodeName:ha-748477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 17:52:04.145779   38757 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-748477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 17:52:04.145807   38757 kube-vip.go:115] generating kube-vip config ...
	I0927 17:52:04.145847   38757 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:52:04.157586   38757 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:52:04.157734   38757 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0927 17:52:04.157802   38757 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:52:04.168157   38757 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 17:52:04.168219   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 17:52:04.178726   38757 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0927 17:52:04.195170   38757 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:52:04.213194   38757 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 17:52:04.231689   38757 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:52:04.250518   38757 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:52:04.255638   38757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:52:04.399700   38757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:52:04.414795   38757 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.217
	I0927 17:52:04.414817   38757 certs.go:194] generating shared ca certs ...
	I0927 17:52:04.414840   38757 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:52:04.415014   38757 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:52:04.415056   38757 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:52:04.415063   38757 certs.go:256] generating profile certs ...
	I0927 17:52:04.415130   38757 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:52:04.415155   38757 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601
	I0927 17:52:04.415175   38757 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.225 192.168.39.254]
	I0927 17:52:04.603809   38757 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601 ...
	I0927 17:52:04.603848   38757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601: {Name:mk1174f2e9d4ef80315691684af9396502bb75fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:52:04.604016   38757 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601 ...
	I0927 17:52:04.604030   38757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601: {Name:mkd8a32d0d2e01a5028c1808f38e911c66423418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:52:04.604101   38757 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:52:04.604267   38757 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:52:04.604397   38757 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:52:04.604411   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:52:04.604424   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:52:04.604435   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:52:04.604447   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:52:04.604457   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:52:04.604466   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:52:04.604483   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:52:04.604492   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:52:04.604537   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:52:04.604562   38757 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:52:04.604569   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:52:04.604597   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:52:04.604624   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:52:04.604645   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:52:04.604681   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:52:04.604705   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.604728   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.604745   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.605392   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:52:04.631025   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:52:04.657046   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:52:04.680727   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:52:04.704625   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 17:52:04.728489   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 17:52:04.752645   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:52:04.777694   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:52:04.801729   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:52:04.825565   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:52:04.849024   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:52:04.873850   38757 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 17:52:04.891129   38757 ssh_runner.go:195] Run: openssl version
	I0927 17:52:04.897311   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:52:04.909302   38757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.913855   38757 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.913912   38757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.919615   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:52:04.929082   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:52:04.940922   38757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.945226   38757 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.945284   38757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.950859   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:52:04.960123   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:52:04.970748   38757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.975025   38757 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.975086   38757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.980279   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 17:52:04.989244   38757 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:52:04.993624   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 17:52:04.999189   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 17:52:05.005061   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 17:52:05.010556   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 17:52:05.016104   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 17:52:05.021587   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 17:52:05.027294   38757 kubeadm.go:392] StartCluster: {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:52:05.027474   38757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 17:52:05.027566   38757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 17:52:05.064404   38757 cri.go:89] found id: "1b9410286a4cec350755db66e63c86ea609da094bebc93494e31b00cd3561840"
	I0927 17:52:05.064430   38757 cri.go:89] found id: "04ef7eba61dfa4987959a431a6b525f4dc245bdc9ac5a306d7b94035c30a845d"
	I0927 17:52:05.064435   38757 cri.go:89] found id: "16a2ebbf8d55df913983c5d061e2cfdd9a1294deb31db244d2c431dcc794336f"
	I0927 17:52:05.064440   38757 cri.go:89] found id: "d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777"
	I0927 17:52:05.064443   38757 cri.go:89] found id: "de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa"
	I0927 17:52:05.064447   38757 cri.go:89] found id: "a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741"
	I0927 17:52:05.064451   38757 cri.go:89] found id: "cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f"
	I0927 17:52:05.064455   38757 cri.go:89] found id: "42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b"
	I0927 17:52:05.064459   38757 cri.go:89] found id: "4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7"
	I0927 17:52:05.064467   38757 cri.go:89] found id: "d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756"
	I0927 17:52:05.064485   38757 cri.go:89] found id: "72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771"
	I0927 17:52:05.064504   38757 cri.go:89] found id: "c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36"
	I0927 17:52:05.064509   38757 cri.go:89] found id: "657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf"
	I0927 17:52:05.064514   38757 cri.go:89] found id: ""
	I0927 17:52:05.064568   38757 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.245021889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459699244983380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4887a537-077f-48fc-8dfe-7088ca3f4931 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.245556267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6fe85fa-39d9-4af4-8f3c-449593a6d373 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.245640139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6fe85fa-39d9-4af4-8f3c-449593a6d373 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.246074689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6fe85fa-39d9-4af4-8f3c-449593a6d373 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.295418798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca005b01-914d-4949-8a48-ce2d31f7d7a6 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.295490086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca005b01-914d-4949-8a48-ce2d31f7d7a6 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.296678431Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=006fb37a-bb21-41a7-b8b5-78930d938838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.297143785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459699297112337,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=006fb37a-bb21-41a7-b8b5-78930d938838 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.297847500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4a5b219-482e-4b60-aead-e9281e106bda name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.297902247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4a5b219-482e-4b60-aead-e9281e106bda name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.298455542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4a5b219-482e-4b60-aead-e9281e106bda name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.341853414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f2aa3edf-abec-495c-a965-e4d2517cfa53 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.341988441Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f2aa3edf-abec-495c-a965-e4d2517cfa53 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.343465761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9053036-c787-4dca-8e30-015548840089 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.343925890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459699343898719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9053036-c787-4dca-8e30-015548840089 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.344632816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b5f8805-917a-470a-a2aa-97de6ed749fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.344690608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b5f8805-917a-470a-a2aa-97de6ed749fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.345116500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0b5f8805-917a-470a-a2aa-97de6ed749fd name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.388974158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8884b50-5dfb-4cc6-976e-cc4e94f0cad6 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.389105547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8884b50-5dfb-4cc6-976e-cc4e94f0cad6 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.390592244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55d48912-6f78-4219-9749-bc5857caad04 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.391738487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459699391692888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55d48912-6f78-4219-9749-bc5857caad04 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.392530843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73bac327-4829-4a06-8c2e-b0cbee435e15 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.392606436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73bac327-4829-4a06-8c2e-b0cbee435e15 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:54:59 ha-748477 crio[3603]: time="2024-09-27 17:54:59.393286065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73bac327-4829-4a06-8c2e-b0cbee435e15 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2d73744d0b9ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   925e4ebbd3a1c       storage-provisioner
	608b8c4779818       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   3                   4b448aa75cf9e       kube-controller-manager-ha-748477
	77522b0e7a0f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            3                   153c492fceb24       kube-apiserver-ha-748477
	32ada22da1620       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   2                   4b448aa75cf9e       kube-controller-manager-ha-748477
	6f9d172c11627       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   925e4ebbd3a1c       storage-provisioner
	8b4aceb6e02c8       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   9fd92cb2c074a       busybox-7dff88458-j7gsn
	77106038b90e8       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   17d84e5316278       kube-vip-ha-748477
	2fb8d4ad3bbe9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   c5e973435243f       coredns-7c65d6cfc9-qvp2z
	eaac309de683f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   fd6322271998c       kindnet-5wl4m
	1c79692edbb51       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   112aab9f65c43       coredns-7c65d6cfc9-n99lr
	36a07f77582d1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   153c492fceb24       kube-apiserver-ha-748477
	12d02855eee03       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   009f57477683a       kube-proxy-p76v9
	a286c5b0e6086       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   530b499e046b2       etcd-ha-748477
	8603d2b3b9d65       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   a75da9329992e       kube-scheduler-ha-748477
	82d138d00329a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   9af32827ca87e       busybox-7dff88458-j7gsn
	d07f02e11f879       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   ce8d3fbc4ee43       coredns-7c65d6cfc9-qvp2z
	de0f399d2276a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   4c986f9d250c3       coredns-7c65d6cfc9-n99lr
	cd62df5a50cfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   61f84fe579fbd       kindnet-5wl4m
	42146256b0e01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   dc1e025d5f18b       kube-proxy-p76v9
	d2acf98043067       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago       Exited              kube-scheduler            0                   f25008a681435       kube-scheduler-ha-748477
	72fe2a883c95c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   9199f6af07950       etcd-ha-748477
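
The listing above shows the control-plane components (kube-apiserver, kube-controller-manager, storage-provisioner) running on restart attempts 2-4 a couple of minutes after the node came back, while the original attempt-0 containers from roughly 12-13 minutes earlier are all Exited. As a rough reproduction sketch only (not part of the captured run; the profile name is taken from the log and the exact invocation is illustrative), an equivalent view could be pulled from the node with crictl, and the full bundle including the sections below with minikube logs:

    # illustrative only - not part of the captured log
    minikube -p ha-748477 ssh "sudo crictl ps -a"            # per-container status, as listed above
    minikube -p ha-748477 logs --file=ha-748477-logs.txt     # full diagnostic bundle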
	
	
	==> coredns [1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a] <==
	Trace[298206810]: [10.542795397s] [10.542795397s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37078->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[833858064]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 17:52:20.600) (total time: 13131ms):
	Trace[833858064]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37076->10.96.0.1:443: read: connection reset by peer 13131ms (17:52:33.732)
	Trace[833858064]: [13.131845291s] [13.131845291s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2fb8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2] <==
	Trace[1748738603]: [10.001554689s] [10.001554689s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1208010800]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 17:52:17.163) (total time: 10001ms):
	Trace[1208010800]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:52:27.164)
	Trace[1208010800]: [10.001417426s] [10.001417426s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52794->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52794->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52788->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52788->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777] <==
	[INFO] 10.244.2.2:33554 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065853s
	[INFO] 10.244.2.2:58628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162914s
	[INFO] 10.244.1.2:38819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129715s
	[INFO] 10.244.1.2:60816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097737s
	[INFO] 10.244.1.2:36546 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014954s
	[INFO] 10.244.1.2:33829 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081077s
	[INFO] 10.244.1.2:59687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088947s
	[INFO] 10.244.0.4:40268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120362s
	[INFO] 10.244.0.4:38614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077477s
	[INFO] 10.244.0.4:40222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068679s
	[INFO] 10.244.2.2:51489 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133892s
	[INFO] 10.244.1.2:34773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000265454s
	[INFO] 10.244.0.4:56542 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227377s
	[INFO] 10.244.0.4:38585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133165s
	[INFO] 10.244.2.2:32823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133184s
	[INFO] 10.244.2.2:47801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112308s
	[INFO] 10.244.2.2:52586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146231s
	[INFO] 10.244.1.2:50376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194279s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116551s
	[INFO] 10.244.1.2:45074 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069954s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa] <==
	[INFO] 10.244.0.4:36329 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177364s
	[INFO] 10.244.0.4:33684 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001089s
	[INFO] 10.244.2.2:47662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002007928s
	[INFO] 10.244.2.2:59058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158193s
	[INFO] 10.244.2.2:40790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001715411s
	[INFO] 10.244.2.2:48349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153048s
	[INFO] 10.244.1.2:55724 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002121618s
	[INFO] 10.244.1.2:41603 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096809s
	[INFO] 10.244.1.2:57083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631103s
	[INFO] 10.244.0.4:48117 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103399s
	[INFO] 10.244.2.2:56316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155752s
	[INFO] 10.244.2.2:36039 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172138s
	[INFO] 10.244.2.2:39197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113674s
	[INFO] 10.244.1.2:59834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130099s
	[INFO] 10.244.1.2:54472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087078s
	[INFO] 10.244.1.2:42463 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079936s
	[INFO] 10.244.0.4:58994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021944s
	[INFO] 10.244.0.4:50757 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135494s
	[INFO] 10.244.2.2:35416 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170114s
	[INFO] 10.244.1.2:50172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011348s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
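
Both CoreDNS replicas report the same pattern while the control plane was restarting: list/watch calls to the kubernetes Service VIP at 10.96.0.1:443 fail with "connection refused", "no route to host" and TLS handshake timeouts, and the attempt-0 pods are eventually told to re-authenticate and then receive SIGTERM, which matches the Exited attempt-2 kube-apiserver container in the status table above. If the same per-pod logs need to be pulled again, something like the following would do; the pod names are taken from the log and "--previous" is only meaningful after a restart:

    # illustrative only - not part of the captured log
    kubectl -n kube-system logs coredns-7c65d6cfc9-n99lr --previous
    kubectl -n kube-system logs coredns-7c65d6cfc9-qvp2z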
	
	
	==> describe nodes <==
	Name:               ha-748477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:41:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:54:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:42:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-748477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492d2104e50247c88ce564105fa6e436
	  System UUID:                492d2104-e502-47c8-8ce5-64105fa6e436
	  Boot ID:                    e44f404a-867d-4f4e-a185-458196aac718
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j7gsn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-n99lr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-qvp2z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-748477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-5wl4m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-748477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-748477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-p76v9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-748477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-748477                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m8s                   kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-748477 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-748477 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-748477 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-748477 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   NodeNotReady             3m15s (x2 over 3m40s)  kubelet          Node ha-748477 status is now: NodeNotReady
	  Warning  ContainerGCFailed        3m4s (x2 over 4m4s)    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
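
The event list for ha-748477 also records a ContainerGCFailed warning because /var/run/crio/crio.sock was briefly absent, i.e. the kubelet was already up while CRI-O itself was still restarting. A quick way to confirm the runtime's state during such a window might look like this (an illustrative sketch, assuming the usual systemd units on the minikube guest, not part of the captured run):

    # illustrative only - not part of the captured log
    minikube -p ha-748477 ssh "sudo systemctl status crio"
    minikube -p ha-748477 ssh "sudo journalctl -u crio --no-pager | tail -n 50"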
	
	
	Name:               ha-748477-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:42:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:54:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    ha-748477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a797c0b98fa454a9290261a4120ee96
	  System UUID:                1a797c0b-98fa-454a-9290-261a4120ee96
	  Boot ID:                    34503aed-ddd2-4580-b284-b4db7673b25e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xmqtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-748477-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-r9smp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-748477-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-748477-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kxwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-748477-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-748477-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  NodeNotReady             8m35s                  node-controller  Node ha-748477-m02 status is now: NodeNotReady
	  Normal  Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s (x7 over 2m31s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m7s                   node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           100s                   node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           39s                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	
	
	Name:               ha-748477-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_44_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:44:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:54:35 +0000   Fri, 27 Sep 2024 17:54:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:54:35 +0000   Fri, 27 Sep 2024 17:54:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:54:35 +0000   Fri, 27 Sep 2024 17:54:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:54:35 +0000   Fri, 27 Sep 2024 17:54:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-748477-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f10cf0e49714a128d45f579afd701d8
	  System UUID:                7f10cf0e-4971-4a12-8d45-f579afd701d8
	  Boot ID:                    3470ffd2-38b8-4206-9bc1-af77f178a961
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-p8fcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-748477-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-66lb8                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-748477-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-748477-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vwkqb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-748477-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-748477-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 38s                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-748477-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal   RegisteredNode           2m7s               node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal   RegisteredNode           100s               node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	  Normal   NodeNotReady             87s                node-controller  Node ha-748477-m03 status is now: NodeNotReady
	  Normal   Starting                 54s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  54s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  54s (x2 over 54s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    54s (x2 over 54s)  kubelet          Node ha-748477-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     54s (x2 over 54s)  kubelet          Node ha-748477-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 54s                kubelet          Node ha-748477-m03 has been rebooted, boot id: 3470ffd2-38b8-4206-9bc1-af77f178a961
	  Normal   NodeReady                54s                kubelet          Node ha-748477-m03 status is now: NodeReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-748477-m03 event: Registered Node ha-748477-m03 in Controller
	
	
	Name:               ha-748477-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_45_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:45:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:54:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:54:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:54:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:54:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:54:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-748477-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53bc6a6bc9f74a04882f5b53ace38c50
	  System UUID:                53bc6a6b-c9f7-4a04-882f-5b53ace38c50
	  Boot ID:                    73e1f0a4-9f56-44d2-8d04-b202848d2d56
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8kdps       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-proxy-t92jl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m51s (x2 over 9m51s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m51s (x2 over 9m51s)  kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m51s (x2 over 9m51s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m50s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           9m48s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           9m47s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   NodeReady                9m30s                  kubelet          Node ha-748477-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           100s                   node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   NodeNotReady             87s                    node-controller  Node ha-748477-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                    node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                     kubelet          Node ha-748477-m04 has been rebooted, boot id: 73e1f0a4-9f56-44d2-8d04-b202848d2d56
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)        kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)        kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)        kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                     kubelet          Node ha-748477-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[ +12.496309] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.056667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051200] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.195115] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.125330] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279617] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.856213] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.390156] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.062929] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.000255] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.085204] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 17:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.205900] kauditd_printk_skb: 38 callbacks suppressed
	[ +42.959337] kauditd_printk_skb: 26 callbacks suppressed
	[Sep27 17:52] systemd-fstab-generator[3513]: Ignoring "noauto" option for root device
	[  +0.147761] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.183677] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.144373] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.298680] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +0.767359] systemd-fstab-generator[3688]: Ignoring "noauto" option for root device
	[  +3.619289] kauditd_printk_skb: 122 callbacks suppressed
	[  +7.958160] kauditd_printk_skb: 85 callbacks suppressed
	[ +15.085845] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.822614] kauditd_printk_skb: 10 callbacks suppressed
	[Sep27 17:53] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771] <==
	{"level":"warn","ts":"2024-09-27T17:50:31.296073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T17:50:23.980488Z","time spent":"7.315572432s","remote":"127.0.0.1:37338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	2024/09/27 17:50:31 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/27 17:50:31 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T17:50:31.552312Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437254086752604898,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-27T17:50:31.571817Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T17:50:31.571883Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T17:50:31.573825Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T17:50:31.574056Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574104Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574195Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574305Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574401Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574483Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574523Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574532Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574542Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574584Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574692Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574753Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574818Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574851Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.577772Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-09-27T17:50:31.577872Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-09-27T17:50:31.577894Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-748477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-09-27T17:50:31.577879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.021483533s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398] <==
	{"level":"warn","ts":"2024-09-27T17:53:59.663027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:53:59.684237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:53:59.685930Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:53:59.687003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"a09c9983ac28f1fd","from":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-27T17:54:00.111028Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.225:2380/version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:00.111274Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:04.113385Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.225:2380/version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:04.113427Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:04.636259Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"90dcf8742efcd955","rtt":"0s","error":"dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:04.637448Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"90dcf8742efcd955","rtt":"0s","error":"dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:08.116029Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.225:2380/version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:08.116095Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:09.637031Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"90dcf8742efcd955","rtt":"0s","error":"dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:09.638375Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"90dcf8742efcd955","rtt":"0s","error":"dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:11.895996Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.886022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-748477-m03\" ","response":"range_response_count:1 size:5950"}
	{"level":"info","ts":"2024-09-27T17:54:11.896303Z","caller":"traceutil/trace.go:171","msg":"trace[1498784703] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-748477-m03; range_end:; response_count:1; response_revision:2311; }","duration":"154.103345ms","start":"2024-09-27T17:54:11.742024Z","end":"2024-09-27T17:54:11.896128Z","steps":["trace[1498784703] 'range keys from in-memory index tree'  (duration: 152.73752ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T17:54:12.118442Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.225:2380/version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-27T17:54:12.118574Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"90dcf8742efcd955","error":"Get \"https://192.168.39.225:2380/version\": dial tcp 192.168.39.225:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-27T17:54:12.453284Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"90dcf8742efcd955","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-27T17:54:12.453370Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.453399Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.468287Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"90dcf8742efcd955","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-27T17:54:12.468337Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.471665Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.472961Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	
	
	==> kernel <==
	 17:55:00 up 13 min,  0 users,  load average: 0.87, 0.55, 0.33
	Linux ha-748477 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f] <==
	I0927 17:50:02.264819       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:50:02.264850       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:50:02.265025       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:50:02.265085       1 main.go:299] handling current node
	I0927 17:50:02.265097       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:50:02.265102       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:50:02.265156       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:50:02.265160       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:50:12.266791       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:50:12.266862       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:50:12.266979       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:50:12.266999       1 main.go:299] handling current node
	I0927 17:50:12.267011       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:50:12.267016       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:50:12.267058       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:50:12.267072       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:50:22.265686       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:50:22.265818       1 main.go:299] handling current node
	I0927 17:50:22.265856       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:50:22.265878       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:50:22.266053       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:50:22.266115       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:50:22.266299       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:50:22.266341       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	E0927 17:50:29.560841       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598] <==
	I0927 17:54:29.748628       1 main.go:299] handling current node
	I0927 17:54:39.742276       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:54:39.742315       1 main.go:299] handling current node
	I0927 17:54:39.742330       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:54:39.742335       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:54:39.742576       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:54:39.742633       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:54:39.742758       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:54:39.742788       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:54:49.747971       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:54:49.748106       1 main.go:299] handling current node
	I0927 17:54:49.748148       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:54:49.748250       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:54:49.748458       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:54:49.748505       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:54:49.748609       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:54:49.748651       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:54:59.745417       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:54:59.745459       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:54:59.745583       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:54:59.745590       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:54:59.745661       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:54:59.745666       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:54:59.745709       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:54:59.745719       1 main.go:299] handling current node
	
	
	==> kube-apiserver [36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510] <==
	I0927 17:52:09.087535       1 options.go:228] external host was not specified, using 192.168.39.217
	I0927 17:52:09.090154       1 server.go:142] Version: v1.31.1
	I0927 17:52:09.090347       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:52:10.131705       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 17:52:10.138493       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 17:52:10.142445       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 17:52:10.142603       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 17:52:10.142863       1 instance.go:232] Using reconciler: lease
	W0927 17:52:30.128951       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 17:52:30.129090       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 17:52:30.144579       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0927 17:52:30.144578       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3] <==
	I0927 17:52:55.327767       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0927 17:52:55.405819       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 17:52:55.405850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 17:52:55.407609       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 17:52:55.408052       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 17:52:55.408210       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 17:52:55.409552       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 17:52:55.416844       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 17:52:55.419990       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 17:52:55.427890       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 17:52:55.428102       1 aggregator.go:171] initial CRD sync complete...
	I0927 17:52:55.428254       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 17:52:55.428335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 17:52:55.428369       1 cache.go:39] Caches are synced for autoregister controller
	I0927 17:52:55.429650       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 17:52:55.429725       1 policy_source.go:224] refreshing policies
	I0927 17:52:55.439291       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 17:52:55.443307       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0927 17:52:55.609339       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.225 192.168.39.58]
	I0927 17:52:55.610853       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 17:52:55.620320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 17:52:55.623988       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 17:52:56.314580       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 17:52:56.853655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.225 192.168.39.58]
	W0927 17:53:06.845790       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.58]
	
	
	==> kube-controller-manager [32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46] <==
	I0927 17:52:45.876361       1 serving.go:386] Generated self-signed cert in-memory
	I0927 17:52:46.193542       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 17:52:46.193646       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:52:46.195245       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 17:52:46.195372       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 17:52:46.195736       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 17:52:46.195833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 17:52:56.201162       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500] <==
	I0927 17:53:32.731030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:53:32.739018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:53:32.759493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:53:32.784416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:53:32.921608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="23.294922ms"
	I0927 17:53:32.922518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="168.29µs"
	I0927 17:53:34.804529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:53:36.496810       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:53:38.009136       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:53:44.891105       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:53:48.095461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:54:05.468595       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:54:05.493743       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:54:06.423606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="132.389µs"
	I0927 17:54:07.146551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m02"
	I0927 17:54:07.988689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:54:20.437865       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:54:20.524776       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:54:25.696133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.484842ms"
	I0927 17:54:25.696534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.568µs"
	I0927 17:54:35.927942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m03"
	I0927 17:54:51.886942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:54:51.888114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-748477-m04"
	I0927 17:54:51.904836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:54:53.017012       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	
	
	==> kube-proxy [12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010] <==
	E0927 17:52:51.139801       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-748477\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0927 17:52:51.139861       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0927 17:52:51.139917       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:52:51.171944       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:52:51.172005       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:52:51.172029       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:52:51.174409       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:52:51.174761       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:52:51.174786       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:52:51.176265       1 config.go:199] "Starting service config controller"
	I0927 17:52:51.176323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:52:51.176416       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:52:51.176437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:52:51.177241       1 config.go:328] "Starting node config controller"
	I0927 17:52:51.177268       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0927 17:52:54.211906       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0927 17:52:54.212285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:52:54.212476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:52:54.212615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:52:54.212709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:52:54.212857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:52:54.214275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0927 17:52:56.577461       1 shared_informer.go:320] Caches are synced for node config
	I0927 17:52:56.577546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:52:56.577531       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b] <==
	E0927 17:49:22.244787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:25.315993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:25.316592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:25.316234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:25.316842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:28.390113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:28.390289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:31.461435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:31.461735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:34.532914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:34.533019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:34.533107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:34.533147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:43.748407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:43.748474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:46.821366       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:46.821438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:46.821766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:46.821884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:50:02.180570       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:50:02.180846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:50:05.252008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:50:05.252089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:50:14.468960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:50:14.469157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01] <==
	W0927 17:52:48.377123       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:48.377328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.39.217:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:48.878921       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:48.879004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.217:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:48.886041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:48.886095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:48.916020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:48.916072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:49.167848       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:49.167917       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.217:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:49.759298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:49.759354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:49.842031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:49.842089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.217:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:50.510748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:50.511244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:51.721140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:51.721347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:52.383560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:52.383641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:52.683118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:52.683387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:52.882319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.217:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:52.882430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.217:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	I0927 17:53:05.961286       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756] <==
	E0927 17:44:31.312466       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-tpc4p\" not found" pod="default/busybox-7dff88458-tpc4p"
	E0927 17:45:08.782464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.782636       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8041369a-60b6-46ac-ae40-2a232d799caf(kube-system/kindnet-gls7h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gls7h"
	E0927 17:45:08.782676       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" pod="kube-system/kindnet-gls7h"
	I0927 17:45:08.782749       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.783276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:45:08.785675       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fc28a65-d0e3-476e-bc9e-ff4e9f2e85ac(kube-system/kube-proxy-z2tnx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z2tnx"
	E0927 17:45:08.785786       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" pod="kube-system/kube-proxy-z2tnx"
	I0927 17:45:08.785868       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:50:18.155530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:19.863327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0927 17:50:21.051607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0927 17:50:21.060222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:21.500061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0927 17:50:23.149599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0927 17:50:23.830522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:24.585372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0927 17:50:25.374331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0927 17:50:27.016842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0927 17:50:28.006310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:28.251498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0927 17:50:28.532605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0927 17:50:29.174725       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0927 17:50:29.343228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0927 17:50:31.274677       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 27 17:53:45 ha-748477 kubelet[1304]: E0927 17:53:45.643644    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459625642760918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:53:55 ha-748477 kubelet[1304]: E0927 17:53:55.507737    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:53:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:53:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:53:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:53:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:53:55 ha-748477 kubelet[1304]: E0927 17:53:55.647356    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459635646788579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:53:55 ha-748477 kubelet[1304]: E0927 17:53:55.647401    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459635646788579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:05 ha-748477 kubelet[1304]: E0927 17:54:05.650713    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459645648468348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:05 ha-748477 kubelet[1304]: E0927 17:54:05.651085    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459645648468348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:15 ha-748477 kubelet[1304]: E0927 17:54:15.653582    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459655652980420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:15 ha-748477 kubelet[1304]: E0927 17:54:15.653620    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459655652980420,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:25 ha-748477 kubelet[1304]: E0927 17:54:25.656577    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459665655851860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:25 ha-748477 kubelet[1304]: E0927 17:54:25.656980    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459665655851860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:35 ha-748477 kubelet[1304]: E0927 17:54:35.659321    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459675658773853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:35 ha-748477 kubelet[1304]: E0927 17:54:35.659754    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459675658773853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:45 ha-748477 kubelet[1304]: E0927 17:54:45.662452    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459685661635236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:45 ha-748477 kubelet[1304]: E0927 17:54:45.662477    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459685661635236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:55 ha-748477 kubelet[1304]: E0927 17:54:55.507805    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:54:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:54:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:54:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:54:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:54:55 ha-748477 kubelet[1304]: E0927 17:54:55.666365    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459695665691547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:54:55 ha-748477 kubelet[1304]: E0927 17:54:55.666422    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459695665691547,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 17:54:58.954032   40170 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19712-11184/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
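The "bufio.Scanner: token too long" error in the stderr above is Go's bufio.ErrTooLong: a Scanner stops once a single line exceeds its maximum token size (64 KiB by default), which is what happens when a line in lastStart.txt grows past that limit. A minimal, self-contained sketch of the failure mode and the usual Scanner.Buffer workaround follows; it is illustrative only and is not minikube's log-reading code, and the line length and 1 MiB cap are arbitrary stand-ins.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single line longer than the scanner's limit triggers bufio.ErrTooLong
	// ("bufio.Scanner: token too long"), the same error the log reader hit.
	longLine := strings.Repeat("x", 100*1024) // 100 KiB, beyond the 64 KiB default

	s := bufio.NewScanner(strings.NewReader(longLine))
	if !s.Scan() {
		fmt.Println("default buffer:", s.Err()) // bufio.Scanner: token too long
	}

	// Raising the maximum token size lets the same line scan successfully.
	s = bufio.NewScanner(strings.NewReader(longLine))
	s.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow tokens up to 1 MiB
	if s.Scan() {
		fmt.Println("enlarged buffer: read", len(s.Text()), "bytes")
	} else {
		fmt.Println("enlarged buffer:", s.Err())
	}
}

The point of the sketch is that the scanner's default limit, not the size of the file as a whole, is what produces the error.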
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-748477 -n ha-748477
helpers_test.go:261: (dbg) Run:  kubectl --context ha-748477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (392.62s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-748477 stop -v=7 --alsologtostderr: exit status 82 (2m0.483172378s)

                                                
                                                
-- stdout --
	* Stopping node "ha-748477-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 17:55:18.888849   40612 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:55:18.888960   40612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:55:18.888969   40612 out.go:358] Setting ErrFile to fd 2...
	I0927 17:55:18.888973   40612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:55:18.889174   40612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:55:18.889424   40612 out.go:352] Setting JSON to false
	I0927 17:55:18.889504   40612 mustload.go:65] Loading cluster: ha-748477
	I0927 17:55:18.889869   40612 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:55:18.889953   40612 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:55:18.890126   40612 mustload.go:65] Loading cluster: ha-748477
	I0927 17:55:18.890253   40612 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:55:18.890281   40612 stop.go:39] StopHost: ha-748477-m04
	I0927 17:55:18.890685   40612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:55:18.890744   40612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:55:18.907400   40612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0927 17:55:18.907943   40612 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:55:18.908521   40612 main.go:141] libmachine: Using API Version  1
	I0927 17:55:18.908548   40612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:55:18.908909   40612 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:55:18.911138   40612 out.go:177] * Stopping node "ha-748477-m04"  ...
	I0927 17:55:18.912508   40612 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0927 17:55:18.912558   40612 main.go:141] libmachine: (ha-748477-m04) Calling .DriverName
	I0927 17:55:18.912785   40612 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0927 17:55:18.912815   40612 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHHostname
	I0927 17:55:18.915529   40612 main.go:141] libmachine: (ha-748477-m04) DBG | domain ha-748477-m04 has defined MAC address 52:54:00:b6:6c:3f in network mk-ha-748477
	I0927 17:55:18.916145   40612 main.go:141] libmachine: (ha-748477-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:6c:3f", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:54:46 +0000 UTC Type:0 Mac:52:54:00:b6:6c:3f Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:ha-748477-m04 Clientid:01:52:54:00:b6:6c:3f}
	I0927 17:55:18.916185   40612 main.go:141] libmachine: (ha-748477-m04) DBG | domain ha-748477-m04 has defined IP address 192.168.39.37 and MAC address 52:54:00:b6:6c:3f in network mk-ha-748477
	I0927 17:55:18.916373   40612 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHPort
	I0927 17:55:18.916560   40612 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHKeyPath
	I0927 17:55:18.916712   40612 main.go:141] libmachine: (ha-748477-m04) Calling .GetSSHUsername
	I0927 17:55:18.916835   40612 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477-m04/id_rsa Username:docker}
	I0927 17:55:19.009062   40612 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0927 17:55:19.061879   40612 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0927 17:55:19.114050   40612 main.go:141] libmachine: Stopping "ha-748477-m04"...
	I0927 17:55:19.114082   40612 main.go:141] libmachine: (ha-748477-m04) Calling .GetState
	I0927 17:55:19.115739   40612 main.go:141] libmachine: (ha-748477-m04) Calling .Stop
	I0927 17:55:19.119602   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 0/120
	I0927 17:55:20.121085   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 1/120
	I0927 17:55:21.122473   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 2/120
	I0927 17:55:22.124156   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 3/120
	I0927 17:55:23.125694   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 4/120
	I0927 17:55:24.127959   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 5/120
	I0927 17:55:25.129621   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 6/120
	I0927 17:55:26.131021   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 7/120
	I0927 17:55:27.132781   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 8/120
	I0927 17:55:28.134164   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 9/120
	I0927 17:55:29.136453   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 10/120
	I0927 17:55:30.138432   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 11/120
	I0927 17:55:31.139830   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 12/120
	I0927 17:55:32.141262   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 13/120
	I0927 17:55:33.143435   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 14/120
	I0927 17:55:34.145676   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 15/120
	I0927 17:55:35.147137   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 16/120
	I0927 17:55:36.148629   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 17/120
	I0927 17:55:37.150373   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 18/120
	I0927 17:55:38.151967   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 19/120
	I0927 17:55:39.154229   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 20/120
	I0927 17:55:40.155778   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 21/120
	I0927 17:55:41.157588   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 22/120
	I0927 17:55:42.159137   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 23/120
	I0927 17:55:43.160622   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 24/120
	I0927 17:55:44.162709   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 25/120
	I0927 17:55:45.164013   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 26/120
	I0927 17:55:46.165533   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 27/120
	I0927 17:55:47.166999   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 28/120
	I0927 17:55:48.169179   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 29/120
	I0927 17:55:49.171328   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 30/120
	I0927 17:55:50.172837   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 31/120
	I0927 17:55:51.174388   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 32/120
	I0927 17:55:52.175790   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 33/120
	I0927 17:55:53.177297   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 34/120
	I0927 17:55:54.179340   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 35/120
	I0927 17:55:55.180738   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 36/120
	I0927 17:55:56.181980   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 37/120
	I0927 17:55:57.183207   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 38/120
	I0927 17:55:58.184598   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 39/120
	I0927 17:55:59.186623   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 40/120
	I0927 17:56:00.188210   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 41/120
	I0927 17:56:01.189781   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 42/120
	I0927 17:56:02.191410   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 43/120
	I0927 17:56:03.192627   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 44/120
	I0927 17:56:04.194920   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 45/120
	I0927 17:56:05.196500   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 46/120
	I0927 17:56:06.198174   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 47/120
	I0927 17:56:07.200715   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 48/120
	I0927 17:56:08.202008   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 49/120
	I0927 17:56:09.204223   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 50/120
	I0927 17:56:10.205613   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 51/120
	I0927 17:56:11.207622   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 52/120
	I0927 17:56:12.209250   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 53/120
	I0927 17:56:13.211619   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 54/120
	I0927 17:56:14.213937   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 55/120
	I0927 17:56:15.215414   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 56/120
	I0927 17:56:16.217060   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 57/120
	I0927 17:56:17.218577   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 58/120
	I0927 17:56:18.219813   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 59/120
	I0927 17:56:19.222529   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 60/120
	I0927 17:56:20.223986   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 61/120
	I0927 17:56:21.225822   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 62/120
	I0927 17:56:22.227113   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 63/120
	I0927 17:56:23.228988   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 64/120
	I0927 17:56:24.231012   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 65/120
	I0927 17:56:25.232974   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 66/120
	I0927 17:56:26.234713   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 67/120
	I0927 17:56:27.236159   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 68/120
	I0927 17:56:28.237740   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 69/120
	I0927 17:56:29.240076   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 70/120
	I0927 17:56:30.241784   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 71/120
	I0927 17:56:31.243440   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 72/120
	I0927 17:56:32.245998   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 73/120
	I0927 17:56:33.247690   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 74/120
	I0927 17:56:34.249953   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 75/120
	I0927 17:56:35.251935   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 76/120
	I0927 17:56:36.253813   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 77/120
	I0927 17:56:37.255451   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 78/120
	I0927 17:56:38.257037   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 79/120
	I0927 17:56:39.259496   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 80/120
	I0927 17:56:40.261309   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 81/120
	I0927 17:56:41.262766   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 82/120
	I0927 17:56:42.264284   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 83/120
	I0927 17:56:43.265570   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 84/120
	I0927 17:56:44.267373   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 85/120
	I0927 17:56:45.269227   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 86/120
	I0927 17:56:46.270588   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 87/120
	I0927 17:56:47.272275   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 88/120
	I0927 17:56:48.273545   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 89/120
	I0927 17:56:49.275497   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 90/120
	I0927 17:56:50.277094   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 91/120
	I0927 17:56:51.278534   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 92/120
	I0927 17:56:52.280224   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 93/120
	I0927 17:56:53.281662   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 94/120
	I0927 17:56:54.283653   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 95/120
	I0927 17:56:55.285005   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 96/120
	I0927 17:56:56.286273   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 97/120
	I0927 17:56:57.288053   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 98/120
	I0927 17:56:58.289481   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 99/120
	I0927 17:56:59.291872   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 100/120
	I0927 17:57:00.293356   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 101/120
	I0927 17:57:01.294552   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 102/120
	I0927 17:57:02.295928   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 103/120
	I0927 17:57:03.297423   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 104/120
	I0927 17:57:04.299696   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 105/120
	I0927 17:57:05.301111   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 106/120
	I0927 17:57:06.302422   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 107/120
	I0927 17:57:07.303799   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 108/120
	I0927 17:57:08.305160   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 109/120
	I0927 17:57:09.307525   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 110/120
	I0927 17:57:10.309079   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 111/120
	I0927 17:57:11.310554   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 112/120
	I0927 17:57:12.311967   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 113/120
	I0927 17:57:13.313248   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 114/120
	I0927 17:57:14.315433   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 115/120
	I0927 17:57:15.317143   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 116/120
	I0927 17:57:16.318543   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 117/120
	I0927 17:57:17.319845   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 118/120
	I0927 17:57:18.321390   40612 main.go:141] libmachine: (ha-748477-m04) Waiting for machine to stop 119/120
	I0927 17:57:19.322723   40612 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0927 17:57:19.322776   40612 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0927 17:57:19.324559   40612 out.go:201] 
	W0927 17:57:19.325636   40612 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0927 17:57:19.325654   40612 out.go:270] * 
	* 
	W0927 17:57:19.327930   40612 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 17:57:19.329458   40612 out.go:201] 

                                                
                                                
** /stderr **
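The "Waiting for machine to stop N/120" lines above trace a fixed-budget poll: one state check per attempt, 120 attempts, after which the stop gives up with GUEST_STOP_TIMEOUT (exit status 82) because the VM still reports "Running". A rough sketch of a loop of that shape is below; vmState is a hypothetical stub that never leaves "Running", and this is not minikube's actual stop implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for whatever reports the machine's state; this stub
// never reaches "Stopped", mirroring the failed run in the log.
func vmState() string { return "Running" }

// waitForStop polls the state once per interval for at most attempts
// iterations, the same shape as the 0/120 .. 119/120 lines above.
func waitForStop(attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if vmState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The log's budget is 120 attempts at roughly 1s each (about the 2m0.48s
	// the stop command ran); a shorter interval keeps the sketch quick.
	if err := waitForStop(120, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}

With a 1-second interval the budget works out to about the two minutes the stop command spent before exiting; only the interval changes here, not the structure of the loop.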
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-748477 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr: (18.884420252s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-748477 -n ha-748477
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 logs -n 25: (1.550534458s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m04 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp testdata/cp-test.txt                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477:/home/docker/cp-test_ha-748477-m04_ha-748477.txt                       |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477 sudo cat                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477.txt                                 |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m02:/home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m02 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m03:/home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n                                                                 | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | ha-748477-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-748477 ssh -n ha-748477-m03 sudo cat                                          | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC | 27 Sep 24 17:45 UTC |
	|         | /home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-748477 node stop m02 -v=7                                                     | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-748477 node start m02 -v=7                                                    | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-748477 -v=7                                                           | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-748477 -v=7                                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-748477 --wait=true -v=7                                                    | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:50 UTC | 27 Sep 24 17:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-748477                                                                | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:54 UTC |                     |
	| node    | ha-748477 node delete m03 -v=7                                                   | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:55 UTC | 27 Sep 24 17:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-748477 stop -v=7                                                              | ha-748477 | jenkins | v1.34.0 | 27 Sep 24 17:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 17:50:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 17:50:30.424385   38757 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:50:30.424514   38757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:50:30.424523   38757 out.go:358] Setting ErrFile to fd 2...
	I0927 17:50:30.424527   38757 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:50:30.425271   38757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:50:30.426831   38757 out.go:352] Setting JSON to false
	I0927 17:50:30.428150   38757 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5575,"bootTime":1727453855,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:50:30.428295   38757 start.go:139] virtualization: kvm guest
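The hostinfo probe above is logged as a single JSON object. A small sketch of decoding that line into a Go struct, trimmed to a few of the keys shown (the struct is my own stand-in, not minikube's type):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // hostInfo captures a subset of the fields printed in the hostinfo log line.
    type hostInfo struct {
        Hostname        string `json:"hostname"`
        Uptime          uint64 `json:"uptime"`
        OS              string `json:"os"`
        Platform        string `json:"platform"`
        PlatformVersion string `json:"platformVersion"`
        KernelVersion   string `json:"kernelVersion"`
        VirtSystem      string `json:"virtualizationSystem"`
        VirtRole        string `json:"virtualizationRole"`
    }

    func main() {
        raw := `{"hostname":"ubuntu-20-agent-8","uptime":5575,"os":"linux","platform":"ubuntu","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","virtualizationSystem":"kvm","virtualizationRole":"guest"}`
        var hi hostInfo
        if err := json.Unmarshal([]byte(raw), &hi); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%+v\n", hi)
    }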
	I0927 17:50:30.430588   38757 out.go:177] * [ha-748477] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:50:30.432316   38757 notify.go:220] Checking for updates...
	I0927 17:50:30.432344   38757 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:50:30.434073   38757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:50:30.435876   38757 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:50:30.437587   38757 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:50:30.439384   38757 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:50:30.441049   38757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:50:30.443422   38757 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:50:30.443558   38757 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:50:30.444318   38757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:50:30.444365   38757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:50:30.460317   38757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0927 17:50:30.460923   38757 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:50:30.461624   38757 main.go:141] libmachine: Using API Version  1
	I0927 17:50:30.461658   38757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:50:30.462039   38757 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:50:30.462301   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:50:30.502146   38757 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 17:50:30.503553   38757 start.go:297] selected driver: kvm2
	I0927 17:50:30.503568   38757 start.go:901] validating driver "kvm2" against &{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:50:30.503781   38757 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:50:30.504226   38757 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:50:30.504312   38757 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 17:50:30.520160   38757 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 17:50:30.520901   38757 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 17:50:30.520937   38757 cni.go:84] Creating CNI manager for ""
	I0927 17:50:30.520989   38757 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 17:50:30.521054   38757 start.go:340] cluster config:
	{Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:50:30.521197   38757 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 17:50:30.523468   38757 out.go:177] * Starting "ha-748477" primary control-plane node in "ha-748477" cluster
	I0927 17:50:30.524672   38757 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:50:30.524732   38757 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 17:50:30.524743   38757 cache.go:56] Caching tarball of preloaded images
	I0927 17:50:30.524850   38757 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 17:50:30.524863   38757 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 17:50:30.524985   38757 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/config.json ...
	I0927 17:50:30.525190   38757 start.go:360] acquireMachinesLock for ha-748477: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 17:50:30.525232   38757 start.go:364] duration metric: took 23.245µs to acquireMachinesLock for "ha-748477"
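The lock spec logged above ({... Delay:500ms Timeout:13m0s ...}) suggests a polled acquisition with a retry delay and an overall timeout. A hedged sketch of that pattern follows; tryLock here is a hypothetical in-process stand-in, not the cross-process lock minikube actually uses:

    package main

    import (
        "errors"
        "fmt"
        "sync/atomic"
        "time"
    )

    var held int32 // stand-in for a real cross-process machines lock

    // tryLock is a hypothetical non-blocking acquire.
    func tryLock() bool { return atomic.CompareAndSwapInt32(&held, 0, 1) }

    // acquire polls tryLock every delay until timeout elapses.
    func acquire(delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if tryLock() {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring machines lock")
            }
            time.Sleep(delay)
        }
    }

    func main() {
        start := time.Now()
        if err := acquire(500*time.Millisecond, 13*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }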
	I0927 17:50:30.525273   38757 start.go:96] Skipping create...Using existing machine configuration
	I0927 17:50:30.525280   38757 fix.go:54] fixHost starting: 
	I0927 17:50:30.525533   38757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:50:30.525565   38757 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:50:30.540401   38757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38175
	I0927 17:50:30.540876   38757 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:50:30.541417   38757 main.go:141] libmachine: Using API Version  1
	I0927 17:50:30.541439   38757 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:50:30.541816   38757 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:50:30.542007   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:50:30.542167   38757 main.go:141] libmachine: (ha-748477) Calling .GetState
	I0927 17:50:30.544015   38757 fix.go:112] recreateIfNeeded on ha-748477: state=Running err=<nil>
	W0927 17:50:30.544047   38757 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 17:50:30.546174   38757 out.go:177] * Updating the running kvm2 "ha-748477" VM ...
	I0927 17:50:30.547622   38757 machine.go:93] provisionDockerMachine start ...
	I0927 17:50:30.547649   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:50:30.547909   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.550639   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.551156   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.551186   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.551332   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:30.551510   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.551672   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.551789   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:30.551946   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:30.552213   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:30.552226   38757 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 17:50:30.661058   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:50:30.661099   38757 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:50:30.661376   38757 buildroot.go:166] provisioning hostname "ha-748477"
	I0927 17:50:30.661401   38757 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:50:30.661591   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.664371   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.664860   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.664894   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.665117   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:30.665315   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.665502   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.665651   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:30.665840   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:30.666005   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:30.666020   38757 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-748477 && echo "ha-748477" | sudo tee /etc/hostname
	I0927 17:50:30.783842   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-748477
	
	I0927 17:50:30.783872   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.786699   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.787092   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.787122   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.787372   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:30.787572   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.787763   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:30.787886   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:30.788044   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:30.788237   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:30.788259   38757 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-748477' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-748477/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-748477' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 17:50:30.896561   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
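Each "About to run SSH command" step above is executed on the guest using the key, user, and address shown in the ssh client lines. A minimal sketch of running one such remote command with golang.org/x/crypto/ssh (key path, user, and address copied from the log; InsecureIgnoreHostKey is a simplification for the sketch, not how a hardened client should verify hosts):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification for the sketch
        }
        client, err := ssh.Dial("tcp", "192.168.39.217:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.CombinedOutput(`sudo hostname ha-748477 && echo "ha-748477" | sudo tee /etc/hostname`)
        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
    }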
	I0927 17:50:30.896591   38757 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 17:50:30.896615   38757 buildroot.go:174] setting up certificates
	I0927 17:50:30.896626   38757 provision.go:84] configureAuth start
	I0927 17:50:30.896634   38757 main.go:141] libmachine: (ha-748477) Calling .GetMachineName
	I0927 17:50:30.897036   38757 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:50:30.902127   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.902758   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.902782   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.903088   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.907150   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.907576   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:30.907601   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:30.908171   38757 provision.go:143] copyHostCerts
	I0927 17:50:30.908222   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:50:30.908256   38757 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 17:50:30.908275   38757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 17:50:30.908348   38757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 17:50:30.908484   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:50:30.908527   38757 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 17:50:30.908538   38757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 17:50:30.908585   38757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 17:50:30.908655   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:50:30.908672   38757 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 17:50:30.908678   38757 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 17:50:30.908701   38757 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 17:50:30.908778   38757 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.ha-748477 san=[127.0.0.1 192.168.39.217 ha-748477 localhost minikube]
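provision.go:117 reports generating a server certificate signed by the local CA with the SANs listed (127.0.0.1, 192.168.39.217, ha-748477, localhost, minikube). A compressed sketch of building such a certificate with crypto/x509, using a throwaway in-memory CA for illustration (key size, validity period, and organization name are placeholders, not the values minikube uses):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch; minikube loads ca.pem/ca-key.pem from .minikube/certs.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate with the SANs reported in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-748477"}},
            DNSNames:     []string{"ha-748477", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
            log.Fatal(err)
        }
    }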
	I0927 17:50:30.996703   38757 provision.go:177] copyRemoteCerts
	I0927 17:50:30.996774   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 17:50:30.996797   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:30.999703   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.000154   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:31.000186   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.000318   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:31.000502   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:31.000714   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:31.000921   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:50:31.081756   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 17:50:31.081829   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 17:50:31.108902   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 17:50:31.108996   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0927 17:50:31.135949   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 17:50:31.136043   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 17:50:31.162580   38757 provision.go:87] duration metric: took 265.939805ms to configureAuth
	I0927 17:50:31.162614   38757 buildroot.go:189] setting minikube options for container-runtime
	I0927 17:50:31.162871   38757 config.go:182] Loaded profile config "ha-748477": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:50:31.162957   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:50:31.165683   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.166101   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:50:31.166143   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:50:31.166345   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:50:31.166557   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:31.166693   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:50:31.166826   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:50:31.167003   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:50:31.167172   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:50:31.167186   38757 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 17:52:02.027936   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 17:52:02.027984   38757 machine.go:96] duration metric: took 1m31.480344538s to provisionDockerMachine
	I0927 17:52:02.028004   38757 start.go:293] postStartSetup for "ha-748477" (driver="kvm2")
	I0927 17:52:02.028025   38757 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 17:52:02.028054   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.028518   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 17:52:02.028557   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.031876   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.032328   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.032358   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.032553   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.032736   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.032888   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.033041   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:52:02.114186   38757 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 17:52:02.118480   38757 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 17:52:02.118519   38757 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 17:52:02.118592   38757 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 17:52:02.118700   38757 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 17:52:02.118714   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 17:52:02.118813   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 17:52:02.127965   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:52:02.153036   38757 start.go:296] duration metric: took 125.017384ms for postStartSetup
	I0927 17:52:02.153081   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.153424   38757 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0927 17:52:02.153453   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.156361   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.156926   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.156959   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.157179   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.157388   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.157730   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.157934   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	W0927 17:52:02.237106   38757 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0927 17:52:02.237146   38757 fix.go:56] duration metric: took 1m31.711865222s for fixHost
	I0927 17:52:02.237182   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.240043   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.240421   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.240447   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.240637   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.240852   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.241064   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.241228   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.241412   38757 main.go:141] libmachine: Using SSH client type: native
	I0927 17:52:02.241580   38757 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0927 17:52:02.241589   38757 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 17:52:02.339282   38757 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727459522.305058331
	
	I0927 17:52:02.339315   38757 fix.go:216] guest clock: 1727459522.305058331
	I0927 17:52:02.339324   38757 fix.go:229] Guest: 2024-09-27 17:52:02.305058331 +0000 UTC Remote: 2024-09-27 17:52:02.237163091 +0000 UTC m=+91.848711143 (delta=67.89524ms)
	I0927 17:52:02.339381   38757 fix.go:200] guest clock delta is within tolerance: 67.89524ms
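The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and proceed only when the delta is within tolerance. A small sketch of that comparison; the tolerance value used here is an assumption for illustration, not minikube's configured threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` captured from the guest, as in the log.
        guestOut := "1727459522.305058331"

        // Parse errors are ignored for brevity; the sample input is well-formed.
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        host := time.Now()
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        fmt.Printf("guest clock: %s, delta: %s, within tolerance: %v\n",
            guest.UTC(), delta, delta <= tolerance)
    }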
	I0927 17:52:02.339389   38757 start.go:83] releasing machines lock for "ha-748477", held for 1m31.814120266s
	I0927 17:52:02.339419   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.339685   38757 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:52:02.342515   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.342976   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.343013   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.343049   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.343661   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.343886   38757 main.go:141] libmachine: (ha-748477) Calling .DriverName
	I0927 17:52:02.344001   38757 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 17:52:02.344031   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.344089   38757 ssh_runner.go:195] Run: cat /version.json
	I0927 17:52:02.344112   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHHostname
	I0927 17:52:02.346710   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347057   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347106   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.347131   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347266   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.347468   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.347614   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.347661   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:02.347683   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:02.347775   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:52:02.347883   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHPort
	I0927 17:52:02.348055   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHKeyPath
	I0927 17:52:02.348222   38757 main.go:141] libmachine: (ha-748477) Calling .GetSSHUsername
	I0927 17:52:02.348354   38757 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/ha-748477/id_rsa Username:docker}
	I0927 17:52:02.463983   38757 ssh_runner.go:195] Run: systemctl --version
	I0927 17:52:02.470446   38757 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 17:52:02.632072   38757 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 17:52:02.640064   38757 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 17:52:02.640131   38757 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 17:52:02.650297   38757 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 17:52:02.650321   38757 start.go:495] detecting cgroup driver to use...
	I0927 17:52:02.650387   38757 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 17:52:02.667376   38757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 17:52:02.681617   38757 docker.go:217] disabling cri-docker service (if available) ...
	I0927 17:52:02.681684   38757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 17:52:02.695342   38757 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 17:52:02.709156   38757 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 17:52:02.862957   38757 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 17:52:03.007205   38757 docker.go:233] disabling docker service ...
	I0927 17:52:03.007277   38757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 17:52:03.024936   38757 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 17:52:03.038538   38757 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 17:52:03.188594   38757 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 17:52:03.339738   38757 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 17:52:03.354004   38757 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 17:52:03.373390   38757 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 17:52:03.373457   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.384341   38757 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 17:52:03.384421   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.395736   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.406771   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.417229   38757 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 17:52:03.428906   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.441279   38757 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 17:52:03.452936   38757 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
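The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager to cgroupfs. An equivalent, hedged sketch of those two substitutions done from Go on a local copy of the file (behaviour mirrors the logged sed expressions, but this is not minikube's code):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // local copy for the sketch; the log edits /etc/crio/crio.conf.d/02-crio.conf

        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }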
	I0927 17:52:03.464225   38757 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 17:52:03.474300   38757 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 17:52:03.484185   38757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:52:03.635522   38757 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 17:52:03.893259   38757 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 17:52:03.893343   38757 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 17:52:03.898666   38757 start.go:563] Will wait 60s for crictl version
	I0927 17:52:03.898727   38757 ssh_runner.go:195] Run: which crictl
	I0927 17:52:03.902533   38757 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 17:52:03.939900   38757 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
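After restarting CRI-O, the start logic waits up to 60s for /var/run/crio/crio.sock and then up to 60s for crictl to answer. A generic readiness-poll sketch along those lines (timings taken from the log; the socket check is reduced to a plain stat, which is an assumption for illustration):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitFor polls check every interval until it succeeds or timeout elapses.
    func waitFor(timeout, interval time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        const sock = "/var/run/crio/crio.sock"
        err := waitFor(60*time.Second, time.Second, func() error {
            if _, statErr := os.Stat(sock); statErr != nil {
                return errors.New(sock + " not present yet")
            }
            return nil
        })
        fmt.Println("socket ready:", err == nil)
    }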
	I0927 17:52:03.939996   38757 ssh_runner.go:195] Run: crio --version
	I0927 17:52:03.969560   38757 ssh_runner.go:195] Run: crio --version
	I0927 17:52:04.002061   38757 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 17:52:04.003292   38757 main.go:141] libmachine: (ha-748477) Calling .GetIP
	I0927 17:52:04.005988   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:04.006474   38757 main.go:141] libmachine: (ha-748477) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:7b:81", ip: ""} in network mk-ha-748477: {Iface:virbr1 ExpiryTime:2024-09-27 18:41:25 +0000 UTC Type:0 Mac:52:54:00:cf:7b:81 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-748477 Clientid:01:52:54:00:cf:7b:81}
	I0927 17:52:04.006504   38757 main.go:141] libmachine: (ha-748477) DBG | domain ha-748477 has defined IP address 192.168.39.217 and MAC address 52:54:00:cf:7b:81 in network mk-ha-748477
	I0927 17:52:04.006716   38757 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 17:52:04.011889   38757 kubeadm.go:883] updating cluster {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 17:52:04.012055   38757 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 17:52:04.012107   38757 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:52:04.056973   38757 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:52:04.056996   38757 crio.go:433] Images already preloaded, skipping extraction
	I0927 17:52:04.057042   38757 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 17:52:04.092033   38757 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 17:52:04.092064   38757 cache_images.go:84] Images are preloaded, skipping loading
	I0927 17:52:04.092076   38757 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.1 crio true true} ...
	I0927 17:52:04.092229   38757 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-748477 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 17:52:04.092322   38757 ssh_runner.go:195] Run: crio config
	I0927 17:52:04.145573   38757 cni.go:84] Creating CNI manager for ""
	I0927 17:52:04.145603   38757 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0927 17:52:04.145612   38757 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 17:52:04.145638   38757 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-748477 NodeName:ha-748477 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 17:52:04.145779   38757 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-748477"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 17:52:04.145807   38757 kube-vip.go:115] generating kube-vip config ...
	I0927 17:52:04.145847   38757 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0927 17:52:04.157586   38757 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0927 17:52:04.157734   38757 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
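kube-vip.go:137 renders the static-pod manifest above from the cluster config (VIP address 192.168.39.254, interface eth0, port 8443, load-balancing enabled). A toy sketch of substituting a few of those values with text/template; the template and struct here are my own and trimmed to part of the env section, not minikube's real kube-vip template:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    // vipConfig holds the handful of values substituted into the manifest in this sketch.
    type vipConfig struct {
        Interface string
        Address   string
        Port      string
        LBEnabled bool
    }

    const manifestTmpl = `    env:
        - name: vip_interface
          value: {{ .Interface }}
        - name: port
          value: "{{ .Port }}"
        - name: address
          value: {{ .Address }}
        - name: lb_enable
          value: "{{ .LBEnabled }}"
    `

    func main() {
        tmpl := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        cfg := vipConfig{Interface: "eth0", Address: "192.168.39.254", Port: "8443", LBEnabled: true}
        if err := tmpl.Execute(os.Stdout, cfg); err != nil {
            log.Fatal(err)
        }
    }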
	I0927 17:52:04.157802   38757 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 17:52:04.168157   38757 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 17:52:04.168219   38757 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0927 17:52:04.178726   38757 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0927 17:52:04.195170   38757 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 17:52:04.213194   38757 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 17:52:04.231689   38757 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0927 17:52:04.250518   38757 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0927 17:52:04.255638   38757 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 17:52:04.399700   38757 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 17:52:04.414795   38757 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477 for IP: 192.168.39.217
	I0927 17:52:04.414817   38757 certs.go:194] generating shared ca certs ...
	I0927 17:52:04.414840   38757 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:52:04.415014   38757 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 17:52:04.415056   38757 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 17:52:04.415063   38757 certs.go:256] generating profile certs ...
	I0927 17:52:04.415130   38757 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/client.key
	I0927 17:52:04.415155   38757 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601
	I0927 17:52:04.415175   38757 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217 192.168.39.58 192.168.39.225 192.168.39.254]
	I0927 17:52:04.603809   38757 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601 ...
	I0927 17:52:04.603848   38757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601: {Name:mk1174f2e9d4ef80315691684af9396502bb75fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:52:04.604016   38757 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601 ...
	I0927 17:52:04.604030   38757 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601: {Name:mkd8a32d0d2e01a5028c1808f38e911c66423418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 17:52:04.604101   38757 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt.8a76b601 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt
	I0927 17:52:04.604267   38757 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key.8a76b601 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key
	I0927 17:52:04.604397   38757 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key
	I0927 17:52:04.604411   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 17:52:04.604424   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 17:52:04.604435   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 17:52:04.604447   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 17:52:04.604457   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 17:52:04.604466   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 17:52:04.604483   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 17:52:04.604492   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 17:52:04.604537   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 17:52:04.604562   38757 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 17:52:04.604569   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 17:52:04.604597   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 17:52:04.604624   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 17:52:04.604645   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 17:52:04.604681   38757 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 17:52:04.604705   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.604728   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.604745   38757 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.605392   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 17:52:04.631025   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 17:52:04.657046   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 17:52:04.680727   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 17:52:04.704625   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0927 17:52:04.728489   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 17:52:04.752645   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 17:52:04.777694   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/ha-748477/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 17:52:04.801729   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 17:52:04.825565   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 17:52:04.849024   38757 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 17:52:04.873850   38757 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 17:52:04.891129   38757 ssh_runner.go:195] Run: openssl version
	I0927 17:52:04.897311   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 17:52:04.909302   38757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.913855   38757 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.913912   38757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 17:52:04.919615   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 17:52:04.929082   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 17:52:04.940922   38757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.945226   38757 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.945284   38757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 17:52:04.950859   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 17:52:04.960123   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 17:52:04.970748   38757 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.975025   38757 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.975086   38757 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 17:52:04.980279   38757 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
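The ls/hash/ln sequence above is the standard OpenSSL trust-directory setup: each CA bundle copied to /usr/share/ca-certificates is hashed with "openssl x509 -hash" and symlinked into /etc/ssl/certs as <hash>.0 so anything scanning the default certificate directory can find it (b5213941.0 for minikubeCA here). A small sketch reproducing that step by hand for the minikube CA, using the same paths as this log:

  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
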
	I0927 17:52:04.989244   38757 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 17:52:04.993624   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 17:52:04.999189   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 17:52:05.005061   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 17:52:05.010556   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 17:52:05.016104   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 17:52:05.021587   38757 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
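The six openssl runs above all pass "-checkend 86400", i.e. "will this certificate still be valid 86400 seconds (24 hours) from now?"; a zero exit status lets minikube reuse the existing control-plane certificates instead of regenerating them. A minimal sketch of the same check, using one of the paths from this log:

  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"
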
	I0927 17:52:05.027294   38757 kubeadm.go:392] StartCluster: {Name:ha-748477 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-748477 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.58 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.37 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
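	The StartCluster config above describes an HA profile: three control-plane nodes (192.168.39.217, .58, .225) fronted by the API-server VIP 192.168.39.254, plus one worker (m04, 192.168.39.37), all on CRI-O v1.31.1/crio. A quick sketch to confirm that topology from the host, assuming minikube's default behavior of naming the kubeconfig context after the profile:

  kubectl --context ha-748477 get nodes -o wide
  # the VIP is what clients actually dial; it is baked into the kubeconfig server URL
  kubectl config view -o jsonpath='{.clusters[?(@.name=="ha-748477")].cluster.server}{"\n"}'
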
	I0927 17:52:05.027474   38757 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 17:52:05.027566   38757 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 17:52:05.064404   38757 cri.go:89] found id: "1b9410286a4cec350755db66e63c86ea609da094bebc93494e31b00cd3561840"
	I0927 17:52:05.064430   38757 cri.go:89] found id: "04ef7eba61dfa4987959a431a6b525f4dc245bdc9ac5a306d7b94035c30a845d"
	I0927 17:52:05.064435   38757 cri.go:89] found id: "16a2ebbf8d55df913983c5d061e2cfdd9a1294deb31db244d2c431dcc794336f"
	I0927 17:52:05.064440   38757 cri.go:89] found id: "d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777"
	I0927 17:52:05.064443   38757 cri.go:89] found id: "de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa"
	I0927 17:52:05.064447   38757 cri.go:89] found id: "a7ccc536c4df9efa8c8d0f12b468ad168535f2bddc99ce122723498b83037741"
	I0927 17:52:05.064451   38757 cri.go:89] found id: "cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f"
	I0927 17:52:05.064455   38757 cri.go:89] found id: "42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b"
	I0927 17:52:05.064459   38757 cri.go:89] found id: "4caed5948aafecc97b85436379853f42179e0e54d7fe68a1d4b8a2f480c6d9f7"
	I0927 17:52:05.064467   38757 cri.go:89] found id: "d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756"
	I0927 17:52:05.064485   38757 cri.go:89] found id: "72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771"
	I0927 17:52:05.064504   38757 cri.go:89] found id: "c7ca45fc1dbb1336667ced635a7cfab5898dd31a9696851af6d8d33f2f90ba36"
	I0927 17:52:05.064509   38757 cri.go:89] found id: "657c5e75829c7fbb91729948fc7e9a4b7aa9fab3320a8b1aa6d3bc443c4ae8bf"
	I0927 17:52:05.064514   38757 cri.go:89] found id: ""
	I0927 17:52:05.064568   38757 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.791254471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459858791228918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feca55fa-7844-407a-b314-87de96dbde40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.791790053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=802cafba-6d00-4db7-aa00-8dd0195151ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.791856231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=802cafba-6d00-4db7-aa00-8dd0195151ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.792396014Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=802cafba-6d00-4db7-aa00-8dd0195151ec name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.832595914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cef2e59-61dd-41af-9fbf-4b65aafce4d9 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.832681792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cef2e59-61dd-41af-9fbf-4b65aafce4d9 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.836329171Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d6c241c-0723-41f8-bc23-9ab1fa323103 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.836753482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459858836733008,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d6c241c-0723-41f8-bc23-9ab1fa323103 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.837362596Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=baae6c87-c402-45f3-bcc6-5c63e39c74e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.837431368Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=baae6c87-c402-45f3-bcc6-5c63e39c74e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.837836025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=baae6c87-c402-45f3-bcc6-5c63e39c74e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.884036138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c1f0741-3dba-4f75-9a60-cce7cdcec142 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.884124324Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c1f0741-3dba-4f75-9a60-cce7cdcec142 name=/runtime.v1.RuntimeService/Version
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.885273676Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f046e7a5-b8a5-4ba8-8914-39d168b1c0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.886161347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459858886138645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f046e7a5-b8a5-4ba8-8914-39d168b1c0e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.886921469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=536edf41-0994-4c61-a45d-1606490e1c77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.886994819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=536edf41-0994-4c61-a45d-1606490e1c77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.887458114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=536edf41-0994-4c61-a45d-1606490e1c77 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.928122932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47876775-f1a7-4ece-b6da-c3882a283a5b name=/runtime.v1.RuntimeService/Version
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.928258447Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47876775-f1a7-4ece-b6da-c3882a283a5b name=/runtime.v1.RuntimeService/Version
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.929433155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d41e2029-1819-4952-b631-3c763320dc68 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.930043828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459858930012756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d41e2029-1819-4952-b631-3c763320dc68 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.930706682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36de7751-d246-47bd-8f21-8af275056f9a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.930779555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36de7751-d246-47bd-8f21-8af275056f9a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 17:57:38 ha-748477 crio[3603]: time="2024-09-27 17:57:38.931269298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2d73744d0b9caada5fbb56755c2e9a44023378b6df6da8a43042c55a306b7bd8,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727459611490587657,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727459597485570942,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727459573491873933,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46,PodSandboxId:4b448aa75cf9e9e0ad9ba71b18dee9ee08eed39248d9da91e33d1d66e6b767cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727459565486581530,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 647e1f1a223aa05c0d6b5b0aa1c461da,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f9d172c1162701322db3069162fa7a20996e25b0a0c0cbc7c5886c97019a541,PodSandboxId:925e4ebbd3a1c6e62469167ff33bc8e8eeb3a4dcfa2ae6e95e845c91268d456c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727459563489954644,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5a708d-128c-492d-bff2-7efbfcc9396f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b4aceb6e02c8f914a8961f2cf785068c30ad37eb14920a70482368b7951ecbd,PodSandboxId:9fd92cb2c074ac05df597599e7cc9511e310f44643abf1dfd5aebe924131ede6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727459561813592242,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77106038b90e8b8f2d42dfab44780cf7ceeb084cf9dfbac82b9d73d75936eb98,PodSandboxId:17d84e5316278b1f5a759d81cf772d6f52a90b6ef332caa018cbcd32e848710e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1727459540512924285,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1844aa035b9f0a5bed113aab6037fd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010,PodSandboxId:009f57477683a97ecb0b6734c915e8d9b6a7979791d51f957b2f5acc2609945f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727459528375096058,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGra
cePeriod: 30,},},&Container{Id:eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598,PodSandboxId:fd6322271998caec7d37ee7b203aebdfe594288ef5dc3536c02615e44fcc9739,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727459528552405528,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fb
8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2,PodSandboxId:c5e973435243f33cc6c6c7907034c6fb6c1599c3e4cdffaaa4673de635d01e46,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528595104185,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a,PodSandboxId:112aab9f65c4334a41f83b8e3c08bc77c5e5560d0299cf2ff68506d826f23792,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727459528549004915,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510,PodSandboxId:153c492fceb24740ab5424fd6fcf8f4e8681f4a233e2d58b11531be45da5789b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727459528485541714,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-748477,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: b14aea5a97dfd5a2488f6e3ced308879,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398,PodSandboxId:530b499e046b2e6afe8d7adce63d16b4de66de1c6f20fc358c16efa551ae68a9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727459528335076315,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df3
5a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01,PodSandboxId:a75da9329992e35eef279fa1fd8ddc587405c8c782244f6374f22694b00275d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727459528321932163,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Ann
otations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d138d00329ae9e51a8df4da9d95bbf4705bd6144bc7ddeec89574895284c12,PodSandboxId:9af32827ca87e6451a5ef56a88c57d7e8153b88b924470b5f2984a179e1f1d74,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727459075503813939,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-j7gsn,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07233d33-34ed-44e8-a9d5-376e1860ca0c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa,PodSandboxId:4c986f9d250c302436454c2faa0f9d91b16ac890ce4811c92cef4c8b75af3710,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933152041181,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n99lr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec2d5b00-2422-4e07-a352-a47254a81408,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777,PodSandboxId:ce8d3fbc4ee431121977426135fa65c981aa619609cd279532024f3c926955fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727458933154287536,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qvp2z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61b875d4-dda7-465c-aff9-49e2eb8f5f9f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f,PodSandboxId:61f84fe579fbd1714cba66497d53e990fc7cc3b769dac89bff91580101540c7d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727458921106333831,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5wl4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc7f8df5-02d8-4ad5-a8e8-127335b9d228,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b,PodSandboxId:dc1e025d5f18b6906e30c477ab6e6c7b6d1fd45a27d87d3b58957d89ebb6bdcc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727458920839516229,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p76v9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ebfb1c9-64bb-47d1-962d-49573740e503,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771,PodSandboxId:9199f6af07950fb9da155ea64addeffdc2f1bdb6addc9604fb0590f433df0a3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727458909257349490,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ec1f007f86453df35a2f3141bc489b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756,PodSandboxId:f25008a681435c386989bc22da79780f9d2c52dfc2ee4bd1d34f0366069ed9fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1727458909294829885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-748477,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6983c6d4e8a67eea6f4983292eca43a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36de7751-d246-47bd-8f21-8af275056f9a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d73744d0b9ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   925e4ebbd3a1c       storage-provisioner
	608b8c4779818       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   3                   4b448aa75cf9e       kube-controller-manager-ha-748477
	77522b0e7a0f0       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   153c492fceb24       kube-apiserver-ha-748477
	32ada22da1620       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Exited              kube-controller-manager   2                   4b448aa75cf9e       kube-controller-manager-ha-748477
	6f9d172c11627       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   925e4ebbd3a1c       storage-provisioner
	8b4aceb6e02c8       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   9fd92cb2c074a       busybox-7dff88458-j7gsn
	77106038b90e8       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   17d84e5316278       kube-vip-ha-748477
	2fb8d4ad3bbe9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   c5e973435243f       coredns-7c65d6cfc9-qvp2z
	eaac309de683f       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   fd6322271998c       kindnet-5wl4m
	1c79692edbb51       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   112aab9f65c43       coredns-7c65d6cfc9-n99lr
	36a07f77582d1       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   153c492fceb24       kube-apiserver-ha-748477
	12d02855eee03       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   009f57477683a       kube-proxy-p76v9
	a286c5b0e6086       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   530b499e046b2       etcd-ha-748477
	8603d2b3b9d65       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   a75da9329992e       kube-scheduler-ha-748477
	82d138d00329a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   9af32827ca87e       busybox-7dff88458-j7gsn
	d07f02e11f879       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   ce8d3fbc4ee43       coredns-7c65d6cfc9-qvp2z
	de0f399d2276a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   4c986f9d250c3       coredns-7c65d6cfc9-n99lr
	cd62df5a50cfd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   61f84fe579fbd       kindnet-5wl4m
	42146256b0e01       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   dc1e025d5f18b       kube-proxy-p76v9
	d2acf98043067       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   f25008a681435       kube-scheduler-ha-748477
	72fe2a883c95c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   9199f6af07950       etcd-ha-748477
	
	
	==> coredns [1c79692edbb51f59a5d68c05f12b1c9544d53d72853a5fc566b8e0b27a694c4a] <==
	Trace[298206810]: [10.542795397s] [10.542795397s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:37078->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[833858064]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 17:52:20.600) (total time: 13131ms):
	Trace[833858064]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37076->10.96.0.1:443: read: connection reset by peer 13131ms (17:52:33.732)
	Trace[833858064]: [13.131845291s] [13.131845291s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:37076->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [2fb8d4ad3bbe9dfa1e397265b5bc3c7fa06902ac7287f2d5254e537109db5ac2] <==
	Trace[1748738603]: [10.001554689s] [10.001554689s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1208010800]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (27-Sep-2024 17:52:17.163) (total time: 10001ms):
	Trace[1208010800]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:52:27.164)
	Trace[1208010800]: [10.001417426s] [10.001417426s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52794->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52794->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52788->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:52788->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d07f02e11f879bac32a05e4e9404a91174ced3eadd05219f66f60843a3b3c777] <==
	[INFO] 10.244.2.2:33554 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065853s
	[INFO] 10.244.2.2:58628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000162914s
	[INFO] 10.244.1.2:38819 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129715s
	[INFO] 10.244.1.2:60816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097737s
	[INFO] 10.244.1.2:36546 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014954s
	[INFO] 10.244.1.2:33829 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081077s
	[INFO] 10.244.1.2:59687 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088947s
	[INFO] 10.244.0.4:40268 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120362s
	[INFO] 10.244.0.4:38614 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077477s
	[INFO] 10.244.0.4:40222 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068679s
	[INFO] 10.244.2.2:51489 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133892s
	[INFO] 10.244.1.2:34773 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000265454s
	[INFO] 10.244.0.4:56542 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000227377s
	[INFO] 10.244.0.4:38585 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133165s
	[INFO] 10.244.2.2:32823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133184s
	[INFO] 10.244.2.2:47801 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112308s
	[INFO] 10.244.2.2:52586 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000146231s
	[INFO] 10.244.1.2:50376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000194279s
	[INFO] 10.244.1.2:51296 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116551s
	[INFO] 10.244.1.2:45074 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000069954s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [de0f399d2276a581bd9c7484922f1219d13dbf57eb21d163fad47c9ff54ad0fa] <==
	[INFO] 10.244.0.4:36329 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000177364s
	[INFO] 10.244.0.4:33684 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001089s
	[INFO] 10.244.2.2:47662 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002007928s
	[INFO] 10.244.2.2:59058 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158193s
	[INFO] 10.244.2.2:40790 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001715411s
	[INFO] 10.244.2.2:48349 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153048s
	[INFO] 10.244.1.2:55724 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002121618s
	[INFO] 10.244.1.2:41603 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096809s
	[INFO] 10.244.1.2:57083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001631103s
	[INFO] 10.244.0.4:48117 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103399s
	[INFO] 10.244.2.2:56316 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155752s
	[INFO] 10.244.2.2:36039 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000172138s
	[INFO] 10.244.2.2:39197 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113674s
	[INFO] 10.244.1.2:59834 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130099s
	[INFO] 10.244.1.2:54472 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087078s
	[INFO] 10.244.1.2:42463 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079936s
	[INFO] 10.244.0.4:58994 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021944s
	[INFO] 10.244.0.4:50757 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135494s
	[INFO] 10.244.2.2:35416 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000170114s
	[INFO] 10.244.1.2:50172 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011348s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-748477
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T17_41_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:41:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:57:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:41:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:52:58 +0000   Fri, 27 Sep 2024 17:42:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    ha-748477
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 492d2104e50247c88ce564105fa6e436
	  System UUID:                492d2104-e502-47c8-8ce5-64105fa6e436
	  Boot ID:                    e44f404a-867d-4f4e-a185-458196aac718
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j7gsn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-n99lr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-qvp2z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-748477                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-5wl4m                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-748477             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-748477    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-p76v9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-748477             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-748477                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m48s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-748477 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-748477 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-748477 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-748477 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   NodeNotReady             5m55s (x2 over 6m20s)  kubelet          Node ha-748477 status is now: NodeNotReady
	  Warning  ContainerGCFailed        5m44s (x2 over 6m44s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-748477 event: Registered Node ha-748477 in Controller
	
	
	Name:               ha-748477-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_42_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:42:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:57:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 17:54:07 +0000   Fri, 27 Sep 2024 17:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    ha-748477-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a797c0b98fa454a9290261a4120ee96
	  System UUID:                1a797c0b-98fa-454a-9290-261a4120ee96
	  Boot ID:                    34503aed-ddd2-4580-b284-b4db7673b25e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xmqtg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-748477-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-r9smp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-748477-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-748477-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-kxwmh                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-748477-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-748477-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m17s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-748477-m02 status is now: NodeNotReady
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node ha-748477-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node ha-748477-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-748477-m02 event: Registered Node ha-748477-m02 in Controller
	
	
	Name:               ha-748477-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-748477-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=ha-748477
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T17_45_09_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 17:45:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-748477-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 17:55:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:55:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:55:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:55:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 17:54:51 +0000   Fri, 27 Sep 2024 17:55:53 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.37
	  Hostname:    ha-748477-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53bc6a6bc9f74a04882f5b53ace38c50
	  System UUID:                53bc6a6b-c9f7-4a04-882f-5b53ace38c50
	  Boot ID:                    73e1f0a4-9f56-44d2-8d04-b202848d2d56
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-49bvc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-8kdps              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-t92jl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-748477-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           4m20s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-748477-m04 event: Registered Node ha-748477-m04 in Controller
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-748477-m04 has been rebooted, boot id: 73e1f0a4-9f56-44d2-8d04-b202848d2d56
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-748477-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-748477-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m48s                  kubelet          Node ha-748477-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 4m7s)    node-controller  Node ha-748477-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[ +12.496309] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.056667] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051200] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.195115] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.125330] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.279617] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +3.856213] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +4.390156] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.062929] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.000255] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +0.085204] kauditd_printk_skb: 79 callbacks suppressed
	[Sep27 17:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.205900] kauditd_printk_skb: 38 callbacks suppressed
	[ +42.959337] kauditd_printk_skb: 26 callbacks suppressed
	[Sep27 17:52] systemd-fstab-generator[3513]: Ignoring "noauto" option for root device
	[  +0.147761] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.183677] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.144373] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.298680] systemd-fstab-generator[3579]: Ignoring "noauto" option for root device
	[  +0.767359] systemd-fstab-generator[3688]: Ignoring "noauto" option for root device
	[  +3.619289] kauditd_printk_skb: 122 callbacks suppressed
	[  +7.958160] kauditd_printk_skb: 85 callbacks suppressed
	[ +15.085845] kauditd_printk_skb: 1 callbacks suppressed
	[ +15.822614] kauditd_printk_skb: 10 callbacks suppressed
	[Sep27 17:53] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [72fe2a883c95c1a39ddbef4cd363e83595700101922f52af2e5132409aa44771] <==
	{"level":"warn","ts":"2024-09-27T17:50:31.296073Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-27T17:50:23.980488Z","time spent":"7.315572432s","remote":"127.0.0.1:37338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" limit:500 "}
	2024/09/27 17:50:31 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/27 17:50:31 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-27T17:50:31.552312Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17437254086752604898,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-27T17:50:31.571817Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T17:50:31.571883Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.217:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T17:50:31.573825Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"a09c9983ac28f1fd","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-27T17:50:31.574056Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574104Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574195Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574305Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574401Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574483Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574523Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"eca287baca66ada2"}
	{"level":"info","ts":"2024-09-27T17:50:31.574532Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574542Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574584Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574692Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574753Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574818Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.574851Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:50:31.577772Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-09-27T17:50:31.577872Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.217:2380"}
	{"level":"info","ts":"2024-09-27T17:50:31.577894Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-748477","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
	{"level":"warn","ts":"2024-09-27T17:50:31.577879Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.021483533s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	
	
	==> etcd [a286c5b0e6086b9aa72f50156ed9e1b2d8b9ada389c71d6556aa86e0d442a398] <==
	{"level":"info","ts":"2024-09-27T17:54:12.453370Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.453399Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.468287Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"a09c9983ac28f1fd","to":"90dcf8742efcd955","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-27T17:54:12.468337Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.471665Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:54:12.472961Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.681551Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.225:37082","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-27T17:55:05.697242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a09c9983ac28f1fd switched to configuration voters=(11573293933243462141 17051340375507512738)"}
	{"level":"info","ts":"2024-09-27T17:55:05.699535Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"8f39477865362797","local-member-id":"a09c9983ac28f1fd","removed-remote-peer-id":"90dcf8742efcd955","removed-remote-peer-urls":["https://192.168.39.225:2380"]}
	{"level":"info","ts":"2024-09-27T17:55:05.699624Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.700282Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"a09c9983ac28f1fd","removed-member-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.700375Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-09-27T17:55:05.700869Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:55:05.700913Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.701433Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:55:05.701491Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:55:05.701650Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.701864Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955","error":"context canceled"}
	{"level":"warn","ts":"2024-09-27T17:55:05.701913Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"90dcf8742efcd955","error":"failed to read 90dcf8742efcd955 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-27T17:55:05.701992Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.702839Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955","error":"context canceled"}
	{"level":"info","ts":"2024-09-27T17:55:05.702873Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"a09c9983ac28f1fd","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:55:05.702890Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"90dcf8742efcd955"}
	{"level":"info","ts":"2024-09-27T17:55:05.702921Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"a09c9983ac28f1fd","removed-remote-peer-id":"90dcf8742efcd955"}
	{"level":"warn","ts":"2024-09-27T17:55:05.723337Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.39.225:45040","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:57:39 up 16 min,  0 users,  load average: 0.35, 0.43, 0.31
	Linux ha-748477 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [cd62df5a50cfdc2566e3574cb02daf4c71cc4e71fc556b9c45e2c5fa7a37d04f] <==
	I0927 17:50:02.264819       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:50:02.264850       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:50:02.265025       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:50:02.265085       1 main.go:299] handling current node
	I0927 17:50:02.265097       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:50:02.265102       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:50:02.265156       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:50:02.265160       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:50:12.266791       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:50:12.266862       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:50:12.266979       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:50:12.266999       1 main.go:299] handling current node
	I0927 17:50:12.267011       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:50:12.267016       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:50:12.267058       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:50:12.267072       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:50:22.265686       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:50:22.265818       1 main.go:299] handling current node
	I0927 17:50:22.265856       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:50:22.265878       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:50:22.266053       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0927 17:50:22.266115       1 main.go:322] Node ha-748477-m03 has CIDR [10.244.2.0/24] 
	I0927 17:50:22.266299       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:50:22.266341       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	E0927 17:50:29.560841       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes)
	
	
	==> kindnet [eaac309de683fdcf3796760243e59eab2a3838c109bbdab31a7aa32ac3636598] <==
	I0927 17:56:49.744102       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:56:59.739667       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:56:59.739860       1 main.go:299] handling current node
	I0927 17:56:59.739916       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:56:59.739961       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:56:59.740159       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:56:59.740281       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:57:09.739461       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:57:09.739629       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:57:09.739835       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:57:09.739868       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:57:09.739928       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:57:09.739946       1 main.go:299] handling current node
	I0927 17:57:19.740557       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:57:19.740594       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:57:19.740733       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:57:19.740750       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:57:19.740817       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:57:19.740833       1 main.go:299] handling current node
	I0927 17:57:29.740608       1 main.go:295] Handling node with IPs: map[192.168.39.58:{}]
	I0927 17:57:29.740845       1 main.go:322] Node ha-748477-m02 has CIDR [10.244.1.0/24] 
	I0927 17:57:29.741060       1 main.go:295] Handling node with IPs: map[192.168.39.37:{}]
	I0927 17:57:29.741085       1 main.go:322] Node ha-748477-m04 has CIDR [10.244.3.0/24] 
	I0927 17:57:29.741262       1 main.go:295] Handling node with IPs: map[192.168.39.217:{}]
	I0927 17:57:29.741335       1 main.go:299] handling current node
	
	
	==> kube-apiserver [36a07f77582d116e3538241923c7d20198496f80904d8ac6bbf17ea2a9244510] <==
	I0927 17:52:09.087535       1 options.go:228] external host was not specified, using 192.168.39.217
	I0927 17:52:09.090154       1 server.go:142] Version: v1.31.1
	I0927 17:52:09.090347       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:52:10.131705       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0927 17:52:10.138493       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 17:52:10.142445       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 17:52:10.142603       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 17:52:10.142863       1 instance.go:232] Using reconciler: lease
	W0927 17:52:30.128951       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 17:52:30.129090       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0927 17:52:30.144579       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0927 17:52:30.144578       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [77522b0e7a0f0b8cd37a610866bc005ac70d8bb2e302018ff54257471fd808e3] <==
	I0927 17:52:55.405819       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 17:52:55.405850       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 17:52:55.407609       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 17:52:55.408052       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 17:52:55.408210       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 17:52:55.409552       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 17:52:55.416844       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 17:52:55.419990       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 17:52:55.427890       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 17:52:55.428102       1 aggregator.go:171] initial CRD sync complete...
	I0927 17:52:55.428254       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 17:52:55.428335       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 17:52:55.428369       1 cache.go:39] Caches are synced for autoregister controller
	I0927 17:52:55.429650       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 17:52:55.429725       1 policy_source.go:224] refreshing policies
	I0927 17:52:55.439291       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 17:52:55.443307       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0927 17:52:55.609339       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.225 192.168.39.58]
	I0927 17:52:55.610853       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 17:52:55.620320       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0927 17:52:55.623988       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0927 17:52:56.314580       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0927 17:52:56.853655       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.225 192.168.39.58]
	W0927 17:53:06.845790       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.58]
	W0927 17:55:16.855631       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217 192.168.39.58]
	
	
	==> kube-controller-manager [32ada22da16205176c641a383935b72c597efe67f126d0eeee5863d090c37d46] <==
	I0927 17:52:45.876361       1 serving.go:386] Generated self-signed cert in-memory
	I0927 17:52:46.193542       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0927 17:52:46.193646       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:52:46.195245       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0927 17:52:46.195372       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0927 17:52:46.195736       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0927 17:52:46.195833       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0927 17:52:56.201162       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: an error on the server (\"[+]ping ok\\n[+]log ok\\n[+]etcd ok\\n[+]poststarthook/start-apiserver-admission-initializer ok\\n[+]poststarthook/generic-apiserver-start-informers ok\\n[+]poststarthook/priority-and-fairness-config-consumer ok\\n[+]poststarthook/priority-and-fairness-filter ok\\n[+]poststarthook/storage-object-count-tracker-hook ok\\n[+]poststarthook/start-apiextensions-informers ok\\n[+]poststarthook/start-apiextensions-controllers ok\\n[+]poststarthook/crd-informer-synced ok\\n[+]poststarthook/start-system-namespaces-controller ok\\n[+]poststarthook/start-cluster-authentication-info-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-controller ok\\n[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok\\n[+]poststarthook/start-legacy-token-tracking-controller ok\\n[+]poststarthook/start-service-ip-repair-controllers ok\\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\\n[+]poststarthook/priority-and-fairness-config-producer ok\\n[+]poststarthook/bootstrap-controller ok\\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\\n[+]poststarthook/start-kube-aggregator-informers ok\\n[+]poststarthook/apiservice-status-local-available-controller ok\\n[+]poststarthook/apiservice-status-remote-available-controller ok\\n[+]poststarthook/apiservice-registration-controller ok\\n[+]poststarthook/apiservice-discovery-controller ok\\n[+]poststarthook/kube-apiserver-autoregistration ok\\n[+]autoregister-completion ok\\n[+]poststarthook/apiservice-openapi-controller ok\\n[+]poststarthook/apiservice-openapiv3-controller ok\\nhealthz check failed\") has prevented the request from succeeding"
	
	
	==> kube-controller-manager [608b8c47798185568e958be27c9062dc1c200d56bc3e744532b4119f995f1500] <==
	I0927 17:55:53.043886       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:55:53.068046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:55:53.080940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.227229ms"
	I0927 17:55:53.081511       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="110.791µs"
	I0927 17:55:54.756732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	I0927 17:55:58.169860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-748477-m04"
	E0927 17:55:59.743717       1 gc_controller.go:151] "Failed to get node" err="node \"ha-748477-m03\" not found" logger="pod-garbage-collector-controller" node="ha-748477-m03"
	E0927 17:55:59.743750       1 gc_controller.go:151] "Failed to get node" err="node \"ha-748477-m03\" not found" logger="pod-garbage-collector-controller" node="ha-748477-m03"
	E0927 17:55:59.743758       1 gc_controller.go:151] "Failed to get node" err="node \"ha-748477-m03\" not found" logger="pod-garbage-collector-controller" node="ha-748477-m03"
	E0927 17:55:59.743763       1 gc_controller.go:151] "Failed to get node" err="node \"ha-748477-m03\" not found" logger="pod-garbage-collector-controller" node="ha-748477-m03"
	E0927 17:55:59.743772       1 gc_controller.go:151] "Failed to get node" err="node \"ha-748477-m03\" not found" logger="pod-garbage-collector-controller" node="ha-748477-m03"
	I0927 17:55:59.756994       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-748477-m03"
	I0927 17:55:59.798524       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-748477-m03"
	I0927 17:55:59.798650       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-66lb8"
	I0927 17:55:59.833590       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-66lb8"
	I0927 17:55:59.833685       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vwkqb"
	I0927 17:55:59.876618       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-vwkqb"
	I0927 17:55:59.876710       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-748477-m03"
	I0927 17:55:59.910471       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-748477-m03"
	I0927 17:55:59.910507       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-748477-m03"
	I0927 17:55:59.952554       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-748477-m03"
	I0927 17:55:59.952893       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-748477-m03"
	I0927 17:55:59.984343       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-748477-m03"
	I0927 17:55:59.984379       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-748477-m03"
	I0927 17:56:00.016466       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-748477-m03"
	
	
	==> kube-proxy [12d02855eee03fcde145a84cb6d25c22a327354d7d4ada47d9d43317d5d56010] <==
	E0927 17:52:51.139801       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-748477\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0927 17:52:51.139861       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0927 17:52:51.139917       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 17:52:51.171944       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 17:52:51.172005       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 17:52:51.172029       1 server_linux.go:169] "Using iptables Proxier"
	I0927 17:52:51.174409       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 17:52:51.174761       1 server.go:483] "Version info" version="v1.31.1"
	I0927 17:52:51.174786       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 17:52:51.176265       1 config.go:199] "Starting service config controller"
	I0927 17:52:51.176323       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 17:52:51.176416       1 config.go:105] "Starting endpoint slice config controller"
	I0927 17:52:51.176437       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 17:52:51.177241       1 config.go:328] "Starting node config controller"
	I0927 17:52:51.177268       1 shared_informer.go:313] Waiting for caches to sync for node config
	E0927 17:52:54.211906       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0927 17:52:54.212285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:52:54.212476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:52:54.212615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:52:54.212709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:52:54.212857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:52:54.214275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&limit=500&resourceVersion=0\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	I0927 17:52:56.577461       1 shared_informer.go:320] Caches are synced for node config
	I0927 17:52:56.577546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 17:52:56.577531       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [42146256b0e017eb1120c81fc4329c3a4ee37f5961ba13c3a97a922b899bfb4b] <==
	E0927 17:49:22.244787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:25.315993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:25.316592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:25.316234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:25.316842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:28.390113       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:28.390289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:31.461435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:31.461735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:34.532914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:34.533019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:34.533107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:34.533147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:43.748407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:43.748474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:46.821366       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:46.821438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:49:46.821766       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:49:46.821884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:50:02.180570       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:50:02.180846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-748477&resourceVersion=1692\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:50:05.252008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:50:05.252089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1706\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0927 17:50:14.468960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718": dial tcp 192.168.39.254:8443: connect: no route to host
	E0927 17:50:14.469157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1718\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [8603d2b3b9d65b3f3d0260892c9c462a408d4e9becf786492482dff11585fd01] <==
	W0927 17:52:48.886041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:48.886095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.217:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:48.916020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:48.916072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.217:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:49.167848       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.217:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:49.167917       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.217:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:49.759298       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:49.759354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.217:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:49.842031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:49.842089       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.217:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:50.510748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:50.511244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.217:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:51.721140       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:51.721347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.217:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:52.383560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:52.383641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.217:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:52.683118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:52.683387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.217:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	W0927 17:52:52.882319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.217:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.217:8443: connect: connection refused
	E0927 17:52:52.882430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.217:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8443: connect: connection refused" logger="UnhandledError"
	I0927 17:53:05.961286       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 17:55:02.290912       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-49bvc\": pod busybox-7dff88458-49bvc is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-49bvc" node="ha-748477-m04"
	E0927 17:55:02.296258       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bba20365-6a47-4b14-bbd2-5718ba14716d(default/busybox-7dff88458-49bvc) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-49bvc"
	E0927 17:55:02.296942       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-49bvc\": pod busybox-7dff88458-49bvc is already assigned to node \"ha-748477-m04\"" pod="default/busybox-7dff88458-49bvc"
	I0927 17:55:02.297232       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-49bvc" node="ha-748477-m04"
	
	
	==> kube-scheduler [d2acf980430670d1899db0d3170785bf66b4e1adfdc42c0e6bfffb62317c7756] <==
	E0927 17:44:31.312466       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-tpc4p\" not found" pod="default/busybox-7dff88458-tpc4p"
	E0927 17:45:08.782464       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.782636       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8041369a-60b6-46ac-ae40-2a232d799caf(kube-system/kindnet-gls7h) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-gls7h"
	E0927 17:45:08.782676       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gls7h\": pod kindnet-gls7h is already assigned to node \"ha-748477-m04\"" pod="kube-system/kindnet-gls7h"
	I0927 17:45:08.782749       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gls7h" node="ha-748477-m04"
	E0927 17:45:08.783276       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:45:08.785675       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 4fc28a65-d0e3-476e-bc9e-ff4e9f2e85ac(kube-system/kube-proxy-z2tnx) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-z2tnx"
	E0927 17:45:08.785786       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-z2tnx\": pod kube-proxy-z2tnx is already assigned to node \"ha-748477-m04\"" pod="kube-system/kube-proxy-z2tnx"
	I0927 17:45:08.785868       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-z2tnx" node="ha-748477-m04"
	E0927 17:50:18.155530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:19.863327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0927 17:50:21.051607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0927 17:50:21.060222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:21.500061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0927 17:50:23.149599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0927 17:50:23.830522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:24.585372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0927 17:50:25.374331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0927 17:50:27.016842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0927 17:50:28.006310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0927 17:50:28.251498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0927 17:50:28.532605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0927 17:50:29.174725       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0927 17:50:29.343228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0927 17:50:31.274677       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 27 17:56:05 ha-748477 kubelet[1304]: E0927 17:56:05.700925    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459765700544121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:05 ha-748477 kubelet[1304]: E0927 17:56:05.701469    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459765700544121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:15 ha-748477 kubelet[1304]: E0927 17:56:15.703018    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459775702473489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:15 ha-748477 kubelet[1304]: E0927 17:56:15.703110    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459775702473489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:25 ha-748477 kubelet[1304]: E0927 17:56:25.706374    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459785705496311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:25 ha-748477 kubelet[1304]: E0927 17:56:25.706416    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459785705496311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:35 ha-748477 kubelet[1304]: E0927 17:56:35.710010    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459795708673685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:35 ha-748477 kubelet[1304]: E0927 17:56:35.710383    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459795708673685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:45 ha-748477 kubelet[1304]: E0927 17:56:45.716212    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459805712104046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:45 ha-748477 kubelet[1304]: E0927 17:56:45.716269    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459805712104046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:55 ha-748477 kubelet[1304]: E0927 17:56:55.505482    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 17:56:55 ha-748477 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 17:56:55 ha-748477 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 17:56:55 ha-748477 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 17:56:55 ha-748477 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 17:56:55 ha-748477 kubelet[1304]: E0927 17:56:55.717984    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459815717675218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:56:55 ha-748477 kubelet[1304]: E0927 17:56:55.718035    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459815717675218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:05 ha-748477 kubelet[1304]: E0927 17:57:05.720055    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459825719149045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:05 ha-748477 kubelet[1304]: E0927 17:57:05.720933    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459825719149045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:15 ha-748477 kubelet[1304]: E0927 17:57:15.725430    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459835724665585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:15 ha-748477 kubelet[1304]: E0927 17:57:15.725480    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459835724665585,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:25 ha-748477 kubelet[1304]: E0927 17:57:25.727574    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459845727075170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:25 ha-748477 kubelet[1304]: E0927 17:57:25.727876    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459845727075170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:35 ha-748477 kubelet[1304]: E0927 17:57:35.729135    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459855728747962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 17:57:35 ha-748477 kubelet[1304]: E0927 17:57:35.729357    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727459855728747962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 17:57:38.527199   41204 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19712-11184/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-748477 -n ha-748477
helpers_test.go:261: (dbg) Run:  kubectl --context ha-748477 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.53s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (323.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-922780
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-922780
E0927 18:15:17.001347   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-922780: exit status 82 (2m1.849099081s)

-- stdout --
	* Stopping node "multinode-922780-m03"  ...
	* Stopping node "multinode-922780-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-922780" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922780 --wait=true -v=8 --alsologtostderr
E0927 18:18:20.065472   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922780 --wait=true -v=8 --alsologtostderr: (3m19.447971528s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-922780
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-922780 -n multinode-922780
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 logs -n 25: (1.448508934s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4096433933/001/cp-test_multinode-922780-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780:/home/docker/cp-test_multinode-922780-m02_multinode-922780.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780 sudo cat                                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m02_multinode-922780.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03:/home/docker/cp-test_multinode-922780-m02_multinode-922780-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780-m03 sudo cat                                   | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m02_multinode-922780-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp testdata/cp-test.txt                                                | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4096433933/001/cp-test_multinode-922780-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780:/home/docker/cp-test_multinode-922780-m03_multinode-922780.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780 sudo cat                                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m03_multinode-922780.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02:/home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780-m02 sudo cat                                   | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-922780 node stop m03                                                          | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	| node    | multinode-922780 node start                                                             | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:14 UTC |                     |
	| stop    | -p multinode-922780                                                                     | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:14 UTC |                     |
	| start   | -p multinode-922780                                                                     | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:16 UTC | 27 Sep 24 18:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:16:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:16:06.633295   50980 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:16:06.633430   50980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:16:06.633439   50980 out.go:358] Setting ErrFile to fd 2...
	I0927 18:16:06.633444   50980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:16:06.633644   50980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:16:06.634208   50980 out.go:352] Setting JSON to false
	I0927 18:16:06.635199   50980 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7112,"bootTime":1727453855,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:16:06.635289   50980 start.go:139] virtualization: kvm guest
	I0927 18:16:06.638753   50980 out.go:177] * [multinode-922780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:16:06.640279   50980 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:16:06.640275   50980 notify.go:220] Checking for updates...
	I0927 18:16:06.643252   50980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:16:06.644829   50980 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:16:06.647425   50980 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:16:06.648815   50980 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:16:06.650269   50980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:16:06.652207   50980 config.go:182] Loaded profile config "multinode-922780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:16:06.652397   50980 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:16:06.653010   50980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:16:06.653088   50980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:16:06.668512   50980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0927 18:16:06.669079   50980 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:16:06.669679   50980 main.go:141] libmachine: Using API Version  1
	I0927 18:16:06.669699   50980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:16:06.670032   50980 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:16:06.670276   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:16:06.707706   50980 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:16:06.709291   50980 start.go:297] selected driver: kvm2
	I0927 18:16:06.709306   50980 start.go:901] validating driver "kvm2" against &{Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:16:06.709432   50980 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:16:06.709738   50980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:16:06.709826   50980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:16:06.724907   50980 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:16:06.725654   50980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:16:06.725698   50980 cni.go:84] Creating CNI manager for ""
	I0927 18:16:06.725772   50980 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 18:16:06.725850   50980 start.go:340] cluster config:
	{Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:16:06.726016   50980 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:16:06.728226   50980 out.go:177] * Starting "multinode-922780" primary control-plane node in "multinode-922780" cluster
	I0927 18:16:06.729675   50980 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:16:06.729719   50980 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:16:06.729734   50980 cache.go:56] Caching tarball of preloaded images
	I0927 18:16:06.729853   50980 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:16:06.729867   50980 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:16:06.729972   50980 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/config.json ...
	I0927 18:16:06.730175   50980 start.go:360] acquireMachinesLock for multinode-922780: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:16:06.730219   50980 start.go:364] duration metric: took 26.275µs to acquireMachinesLock for "multinode-922780"
	I0927 18:16:06.730237   50980 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:16:06.730245   50980 fix.go:54] fixHost starting: 
	I0927 18:16:06.730500   50980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:16:06.730535   50980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:16:06.744874   50980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0927 18:16:06.745361   50980 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:16:06.745869   50980 main.go:141] libmachine: Using API Version  1
	I0927 18:16:06.745896   50980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:16:06.746316   50980 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:16:06.746517   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:16:06.746688   50980 main.go:141] libmachine: (multinode-922780) Calling .GetState
	I0927 18:16:06.748272   50980 fix.go:112] recreateIfNeeded on multinode-922780: state=Running err=<nil>
	W0927 18:16:06.748293   50980 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:16:06.750391   50980 out.go:177] * Updating the running kvm2 "multinode-922780" VM ...
	I0927 18:16:06.751826   50980 machine.go:93] provisionDockerMachine start ...
	I0927 18:16:06.751851   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:16:06.752060   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:06.754951   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.755498   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:06.755523   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.755723   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:06.755928   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.756072   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.756191   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:06.756375   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:06.756592   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:06.756604   50980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:16:06.859464   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922780
	
	I0927 18:16:06.859522   50980 main.go:141] libmachine: (multinode-922780) Calling .GetMachineName
	I0927 18:16:06.859780   50980 buildroot.go:166] provisioning hostname "multinode-922780"
	I0927 18:16:06.859804   50980 main.go:141] libmachine: (multinode-922780) Calling .GetMachineName
	I0927 18:16:06.859985   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:06.862471   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.862913   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:06.862938   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.863108   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:06.863320   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.863462   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.863616   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:06.863788   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:06.864009   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:06.864024   50980 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-922780 && echo "multinode-922780" | sudo tee /etc/hostname
	I0927 18:16:06.979859   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922780
	
	I0927 18:16:06.979896   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:06.983501   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.983995   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:06.984033   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.984333   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:06.984574   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.984785   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.984940   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:06.985113   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:06.985342   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:06.985366   50980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-922780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-922780/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-922780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:16:07.087973   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:16:07.088001   50980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:16:07.088038   50980 buildroot.go:174] setting up certificates
	I0927 18:16:07.088048   50980 provision.go:84] configureAuth start
	I0927 18:16:07.088056   50980 main.go:141] libmachine: (multinode-922780) Calling .GetMachineName
	I0927 18:16:07.088379   50980 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:16:07.091802   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.092226   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.092252   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.092559   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:07.095273   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.095692   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.095729   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.095883   50980 provision.go:143] copyHostCerts
	I0927 18:16:07.095910   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:16:07.095954   50980 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:16:07.095967   50980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:16:07.096070   50980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:16:07.096182   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:16:07.096201   50980 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:16:07.096208   50980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:16:07.096237   50980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:16:07.096325   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:16:07.096342   50980 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:16:07.096346   50980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:16:07.096369   50980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:16:07.096430   50980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.multinode-922780 san=[127.0.0.1 192.168.39.6 localhost minikube multinode-922780]
	I0927 18:16:07.226198   50980 provision.go:177] copyRemoteCerts
	I0927 18:16:07.226257   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:16:07.226279   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:07.229395   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.229777   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.229799   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.229979   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:07.230160   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:07.230313   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:07.230472   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:16:07.311548   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 18:16:07.311636   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:16:07.336113   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 18:16:07.336178   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0927 18:16:07.360468   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 18:16:07.360547   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 18:16:07.392888   50980 provision.go:87] duration metric: took 304.829582ms to configureAuth
	I0927 18:16:07.392915   50980 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:16:07.393149   50980 config.go:182] Loaded profile config "multinode-922780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:16:07.393240   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:07.396221   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.396661   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.396692   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.396918   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:07.397118   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:07.397275   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:07.397402   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:07.397544   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:07.397756   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:07.397779   50980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:17:38.184983   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 18:17:38.185011   50980 machine.go:96] duration metric: took 1m31.43316818s to provisionDockerMachine
	I0927 18:17:38.185059   50980 start.go:293] postStartSetup for "multinode-922780" (driver="kvm2")
	I0927 18:17:38.185075   50980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:17:38.185101   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.185497   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:17:38.185536   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.189013   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.189709   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.189731   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.190012   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.190216   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.190399   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.190556   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:17:38.269917   50980 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:17:38.273989   50980 command_runner.go:130] > NAME=Buildroot
	I0927 18:17:38.274007   50980 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0927 18:17:38.274011   50980 command_runner.go:130] > ID=buildroot
	I0927 18:17:38.274016   50980 command_runner.go:130] > VERSION_ID=2023.02.9
	I0927 18:17:38.274023   50980 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0927 18:17:38.274058   50980 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:17:38.274071   50980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:17:38.274126   50980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:17:38.274199   50980 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:17:38.274205   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 18:17:38.274282   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:17:38.283325   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:17:38.307463   50980 start.go:296] duration metric: took 122.386435ms for postStartSetup
	I0927 18:17:38.307515   50980 fix.go:56] duration metric: took 1m31.577269193s for fixHost
	I0927 18:17:38.307541   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.311388   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.311839   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.311870   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.312083   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.312268   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.312486   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.312689   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.312916   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:17:38.313071   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:17:38.313081   50980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:17:38.411421   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727461058.385615551
	
	I0927 18:17:38.411445   50980 fix.go:216] guest clock: 1727461058.385615551
	I0927 18:17:38.411472   50980 fix.go:229] Guest: 2024-09-27 18:17:38.385615551 +0000 UTC Remote: 2024-09-27 18:17:38.30752402 +0000 UTC m=+91.709895056 (delta=78.091531ms)
	I0927 18:17:38.411505   50980 fix.go:200] guest clock delta is within tolerance: 78.091531ms
	I0927 18:17:38.411515   50980 start.go:83] releasing machines lock for "multinode-922780", held for 1m31.681284736s
	I0927 18:17:38.411542   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.411804   50980 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:17:38.414758   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.415194   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.415224   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.415409   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.416035   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.416265   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.416334   50980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:17:38.416382   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.416485   50980 ssh_runner.go:195] Run: cat /version.json
	I0927 18:17:38.416510   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.419410   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.419771   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.419801   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.419821   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.419921   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.420041   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.420063   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.420067   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.420246   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.420261   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.420435   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.420498   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:17:38.420612   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.420765   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:17:38.495158   50980 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0927 18:17:38.495508   50980 ssh_runner.go:195] Run: systemctl --version
	I0927 18:17:38.531856   50980 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0927 18:17:38.531919   50980 command_runner.go:130] > systemd 252 (252)
	I0927 18:17:38.531937   50980 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0927 18:17:38.531991   50980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:17:38.694324   50980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 18:17:38.700130   50980 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0927 18:17:38.700183   50980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:17:38.700256   50980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:17:38.709411   50980 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 18:17:38.709438   50980 start.go:495] detecting cgroup driver to use...
	I0927 18:17:38.709521   50980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:17:38.725808   50980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:17:38.740387   50980 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:17:38.740577   50980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:17:38.756266   50980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:17:38.770329   50980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:17:38.914137   50980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:17:39.055612   50980 docker.go:233] disabling docker service ...
	I0927 18:17:39.055706   50980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:17:39.072235   50980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:17:39.086069   50980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:17:39.224632   50980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:17:39.370567   50980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
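A minimal Go sketch of the runtime shutdown sequence logged above (stop, disable and mask cri-docker and docker so CRI-O is the only active runtime); it assumes sudo and systemd, and illustrates the logged commands rather than minikube's implementation:

package main

// Stops, disables and masks the docker and cri-docker units, mirroring the
// systemctl calls in the log; stop errors are ignored because a unit may not exist.
import (
	"log"
	"os/exec"
)

func systemctl(args ...string) error {
	return exec.Command("sudo", append([]string{"systemctl"}, args...)...).Run()
}

func main() {
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		_ = systemctl("stop", "-f", unit)
	}
	steps := [][]string{
		{"disable", "cri-docker.socket"},
		{"mask", "cri-docker.service"},
		{"disable", "docker.socket"},
		{"mask", "docker.service"},
	}
	for _, s := range steps {
		if err := systemctl(s...); err != nil {
			log.Printf("systemctl %v: %v", s, err)
		}
	}
}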
	I0927 18:17:39.385802   50980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:17:39.404644   50980 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
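The crictl configuration written above is a one-line YAML file; a minimal sketch of the equivalent step in Go, assuming root and the same socket path as in the log:

package main

// Writes /etc/crictl.yaml pointing crictl at the CRI-O socket, as logged above.
import (
	"log"
	"os"
)

func main() {
	const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.MkdirAll("/etc", 0o755); err != nil { // mirrors the logged "mkdir -p /etc"
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
		log.Fatal(err)
	}
}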
	I0927 18:17:39.405093   50980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 18:17:39.405147   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.415352   50980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:17:39.415432   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.425709   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.435787   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.447369   50980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:17:39.459302   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.470166   50980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.481321   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.491285   50980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:17:39.500279   50980 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0927 18:17:39.500371   50980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:17:39.509374   50980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:17:39.649257   50980 ssh_runner.go:195] Run: sudo systemctl restart crio
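The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager before CRI-O is restarted. A minimal Go sketch of the two central substitutions, assuming root and the same file path; it only illustrates the logged commands and is not minikube's code:

package main

// Rewrites pause_image and cgroup_manager in the CRI-O drop-in config; CRI-O must
// then be restarted (sudo systemctl daemon-reload && sudo systemctl restart crio).
import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile("(?m)^.*pause_image = .*$").
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile("(?m)^.*cgroup_manager = .*$").
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
}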
	I0927 18:17:39.853025   50980 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:17:39.853108   50980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:17:39.858522   50980 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0927 18:17:39.858546   50980 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0927 18:17:39.858555   50980 command_runner.go:130] > Device: 0,22	Inode: 1310        Links: 1
	I0927 18:17:39.858563   50980 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 18:17:39.858568   50980 command_runner.go:130] > Access: 2024-09-27 18:17:39.712296829 +0000
	I0927 18:17:39.858575   50980 command_runner.go:130] > Modify: 2024-09-27 18:17:39.712296829 +0000
	I0927 18:17:39.858580   50980 command_runner.go:130] > Change: 2024-09-27 18:17:39.712296829 +0000
	I0927 18:17:39.858583   50980 command_runner.go:130] >  Birth: -
	I0927 18:17:39.858610   50980 start.go:563] Will wait 60s for crictl version
	I0927 18:17:39.858680   50980 ssh_runner.go:195] Run: which crictl
	I0927 18:17:39.862280   50980 command_runner.go:130] > /usr/bin/crictl
	I0927 18:17:39.862427   50980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:17:39.909400   50980 command_runner.go:130] > Version:  0.1.0
	I0927 18:17:39.909423   50980 command_runner.go:130] > RuntimeName:  cri-o
	I0927 18:17:39.909428   50980 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0927 18:17:39.909433   50980 command_runner.go:130] > RuntimeApiVersion:  v1
	I0927 18:17:39.910883   50980 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:17:39.910966   50980 ssh_runner.go:195] Run: crio --version
	I0927 18:17:39.944459   50980 command_runner.go:130] > crio version 1.29.1
	I0927 18:17:39.944484   50980 command_runner.go:130] > Version:        1.29.1
	I0927 18:17:39.944490   50980 command_runner.go:130] > GitCommit:      unknown
	I0927 18:17:39.944494   50980 command_runner.go:130] > GitCommitDate:  unknown
	I0927 18:17:39.944498   50980 command_runner.go:130] > GitTreeState:   clean
	I0927 18:17:39.944504   50980 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 18:17:39.944508   50980 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 18:17:39.944512   50980 command_runner.go:130] > Compiler:       gc
	I0927 18:17:39.944519   50980 command_runner.go:130] > Platform:       linux/amd64
	I0927 18:17:39.944523   50980 command_runner.go:130] > Linkmode:       dynamic
	I0927 18:17:39.944536   50980 command_runner.go:130] > BuildTags:      
	I0927 18:17:39.944542   50980 command_runner.go:130] >   containers_image_ostree_stub
	I0927 18:17:39.944548   50980 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 18:17:39.944557   50980 command_runner.go:130] >   btrfs_noversion
	I0927 18:17:39.944564   50980 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 18:17:39.944572   50980 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 18:17:39.944578   50980 command_runner.go:130] >   seccomp
	I0927 18:17:39.944587   50980 command_runner.go:130] > LDFlags:          unknown
	I0927 18:17:39.944594   50980 command_runner.go:130] > SeccompEnabled:   true
	I0927 18:17:39.944612   50980 command_runner.go:130] > AppArmorEnabled:  false
	I0927 18:17:39.944688   50980 ssh_runner.go:195] Run: crio --version
	I0927 18:17:39.977122   50980 command_runner.go:130] > crio version 1.29.1
	I0927 18:17:39.977148   50980 command_runner.go:130] > Version:        1.29.1
	I0927 18:17:39.977156   50980 command_runner.go:130] > GitCommit:      unknown
	I0927 18:17:39.977161   50980 command_runner.go:130] > GitCommitDate:  unknown
	I0927 18:17:39.977165   50980 command_runner.go:130] > GitTreeState:   clean
	I0927 18:17:39.977171   50980 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 18:17:39.977174   50980 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 18:17:39.977178   50980 command_runner.go:130] > Compiler:       gc
	I0927 18:17:39.977183   50980 command_runner.go:130] > Platform:       linux/amd64
	I0927 18:17:39.977188   50980 command_runner.go:130] > Linkmode:       dynamic
	I0927 18:17:39.977192   50980 command_runner.go:130] > BuildTags:      
	I0927 18:17:39.977196   50980 command_runner.go:130] >   containers_image_ostree_stub
	I0927 18:17:39.977206   50980 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 18:17:39.977212   50980 command_runner.go:130] >   btrfs_noversion
	I0927 18:17:39.977218   50980 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 18:17:39.977223   50980 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 18:17:39.977228   50980 command_runner.go:130] >   seccomp
	I0927 18:17:39.977234   50980 command_runner.go:130] > LDFlags:          unknown
	I0927 18:17:39.977240   50980 command_runner.go:130] > SeccompEnabled:   true
	I0927 18:17:39.977245   50980 command_runner.go:130] > AppArmorEnabled:  false
	I0927 18:17:39.981009   50980 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 18:17:39.982239   50980 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:17:39.985456   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:39.985864   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:39.985889   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:39.986081   50980 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 18:17:39.990207   50980 command_runner.go:130] > 192.168.39.1	host.minikube.internal
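The grep above verifies that the KVM gateway resolves as host.minikube.internal. A small Go sketch of the same check-and-append, assuming root and the gateway address shown in the log:

package main

// Appends "192.168.39.1 host.minikube.internal" to /etc/hosts if it is missing.
import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(data), "host.minikube.internal") {
		return // already present, as in this run
	}
	f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := f.WriteString(entry + "\n"); err != nil {
		log.Fatal(err)
	}
}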
	I0927 18:17:39.990315   50980 kubeadm.go:883] updating cluster {Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:17:39.990461   50980 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:17:39.990526   50980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:17:40.029470   50980 command_runner.go:130] > {
	I0927 18:17:40.029502   50980 command_runner.go:130] >   "images": [
	I0927 18:17:40.029506   50980 command_runner.go:130] >     {
	I0927 18:17:40.029514   50980 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 18:17:40.029519   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029529   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 18:17:40.029535   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029541   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029554   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 18:17:40.029567   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 18:17:40.029574   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029582   50980 command_runner.go:130] >       "size": "87190579",
	I0927 18:17:40.029590   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029597   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.029607   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.029618   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.029624   50980 command_runner.go:130] >     },
	I0927 18:17:40.029628   50980 command_runner.go:130] >     {
	I0927 18:17:40.029634   50980 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 18:17:40.029640   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029645   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 18:17:40.029651   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029657   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029670   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 18:17:40.029685   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 18:17:40.029694   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029702   50980 command_runner.go:130] >       "size": "1363676",
	I0927 18:17:40.029709   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029716   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.029722   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.029726   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.029732   50980 command_runner.go:130] >     },
	I0927 18:17:40.029735   50980 command_runner.go:130] >     {
	I0927 18:17:40.029743   50980 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 18:17:40.029749   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029761   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 18:17:40.029769   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029776   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029792   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 18:17:40.029806   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 18:17:40.029814   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029819   50980 command_runner.go:130] >       "size": "31470524",
	I0927 18:17:40.029825   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029829   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.029835   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.029840   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.029845   50980 command_runner.go:130] >     },
	I0927 18:17:40.029849   50980 command_runner.go:130] >     {
	I0927 18:17:40.029869   50980 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 18:17:40.029881   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029889   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 18:17:40.029898   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029905   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029919   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 18:17:40.029941   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 18:17:40.029948   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029963   50980 command_runner.go:130] >       "size": "63273227",
	I0927 18:17:40.029975   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029984   50980 command_runner.go:130] >       "username": "nonroot",
	I0927 18:17:40.029990   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030000   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030009   50980 command_runner.go:130] >     },
	I0927 18:17:40.030017   50980 command_runner.go:130] >     {
	I0927 18:17:40.030055   50980 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 18:17:40.030098   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030111   50980 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 18:17:40.030121   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030130   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030145   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 18:17:40.030159   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 18:17:40.030168   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030177   50980 command_runner.go:130] >       "size": "149009664",
	I0927 18:17:40.030185   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030193   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030202   50980 command_runner.go:130] >       },
	I0927 18:17:40.030208   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030218   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030228   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030237   50980 command_runner.go:130] >     },
	I0927 18:17:40.030245   50980 command_runner.go:130] >     {
	I0927 18:17:40.030256   50980 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 18:17:40.030278   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030289   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 18:17:40.030297   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030307   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030321   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 18:17:40.030336   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 18:17:40.030351   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030360   50980 command_runner.go:130] >       "size": "95237600",
	I0927 18:17:40.030367   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030373   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030381   50980 command_runner.go:130] >       },
	I0927 18:17:40.030392   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030398   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030408   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030416   50980 command_runner.go:130] >     },
	I0927 18:17:40.030422   50980 command_runner.go:130] >     {
	I0927 18:17:40.030432   50980 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 18:17:40.030442   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030454   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 18:17:40.030463   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030468   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030478   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 18:17:40.030494   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 18:17:40.030502   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030510   50980 command_runner.go:130] >       "size": "89437508",
	I0927 18:17:40.030519   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030528   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030537   50980 command_runner.go:130] >       },
	I0927 18:17:40.030546   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030553   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030557   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030565   50980 command_runner.go:130] >     },
	I0927 18:17:40.030574   50980 command_runner.go:130] >     {
	I0927 18:17:40.030595   50980 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 18:17:40.030606   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030617   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 18:17:40.030626   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030635   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030681   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 18:17:40.030696   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 18:17:40.030705   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030712   50980 command_runner.go:130] >       "size": "92733849",
	I0927 18:17:40.030721   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.030729   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030734   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030743   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030748   50980 command_runner.go:130] >     },
	I0927 18:17:40.030754   50980 command_runner.go:130] >     {
	I0927 18:17:40.030764   50980 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 18:17:40.030770   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030778   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 18:17:40.030784   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030792   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030807   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 18:17:40.030818   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 18:17:40.030825   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030835   50980 command_runner.go:130] >       "size": "68420934",
	I0927 18:17:40.030845   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030854   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030862   50980 command_runner.go:130] >       },
	I0927 18:17:40.030871   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030881   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030890   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030896   50980 command_runner.go:130] >     },
	I0927 18:17:40.030900   50980 command_runner.go:130] >     {
	I0927 18:17:40.030910   50980 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 18:17:40.030926   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030936   50980 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 18:17:40.030944   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030954   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030968   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 18:17:40.030981   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 18:17:40.030987   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030992   50980 command_runner.go:130] >       "size": "742080",
	I0927 18:17:40.031001   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.031011   50980 command_runner.go:130] >         "value": "65535"
	I0927 18:17:40.031017   50980 command_runner.go:130] >       },
	I0927 18:17:40.031027   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.031036   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.031046   50980 command_runner.go:130] >       "pinned": true
	I0927 18:17:40.031055   50980 command_runner.go:130] >     }
	I0927 18:17:40.031062   50980 command_runner.go:130] >   ]
	I0927 18:17:40.031071   50980 command_runner.go:130] > }
	I0927 18:17:40.031260   50980 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:17:40.031273   50980 crio.go:433] Images already preloaded, skipping extraction
	I0927 18:17:40.031319   50980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:17:40.064679   50980 command_runner.go:130] > {
	I0927 18:17:40.064717   50980 command_runner.go:130] >   "images": [
	I0927 18:17:40.064724   50980 command_runner.go:130] >     {
	I0927 18:17:40.064735   50980 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 18:17:40.064741   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.064753   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 18:17:40.064758   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064764   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.064778   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 18:17:40.064793   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 18:17:40.064799   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064804   50980 command_runner.go:130] >       "size": "87190579",
	I0927 18:17:40.064809   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.064813   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.064831   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.064841   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.064846   50980 command_runner.go:130] >     },
	I0927 18:17:40.064851   50980 command_runner.go:130] >     {
	I0927 18:17:40.064860   50980 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 18:17:40.064866   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.064874   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 18:17:40.064880   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064888   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.064900   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 18:17:40.064913   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 18:17:40.064922   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064928   50980 command_runner.go:130] >       "size": "1363676",
	I0927 18:17:40.064935   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.064946   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.064954   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.064965   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.064974   50980 command_runner.go:130] >     },
	I0927 18:17:40.064980   50980 command_runner.go:130] >     {
	I0927 18:17:40.064991   50980 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 18:17:40.065000   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065010   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 18:17:40.065018   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065026   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065041   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 18:17:40.065057   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 18:17:40.065072   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065083   50980 command_runner.go:130] >       "size": "31470524",
	I0927 18:17:40.065091   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.065100   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065107   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065117   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065124   50980 command_runner.go:130] >     },
	I0927 18:17:40.065132   50980 command_runner.go:130] >     {
	I0927 18:17:40.065143   50980 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 18:17:40.065151   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065158   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 18:17:40.065165   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065174   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065186   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 18:17:40.065205   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 18:17:40.065213   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065222   50980 command_runner.go:130] >       "size": "63273227",
	I0927 18:17:40.065237   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.065280   50980 command_runner.go:130] >       "username": "nonroot",
	I0927 18:17:40.065294   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065300   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065305   50980 command_runner.go:130] >     },
	I0927 18:17:40.065312   50980 command_runner.go:130] >     {
	I0927 18:17:40.065324   50980 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 18:17:40.065335   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065344   50980 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 18:17:40.065352   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065359   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065373   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 18:17:40.065387   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 18:17:40.065396   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065403   50980 command_runner.go:130] >       "size": "149009664",
	I0927 18:17:40.065410   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.065419   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.065426   50980 command_runner.go:130] >       },
	I0927 18:17:40.065436   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065443   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065456   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065462   50980 command_runner.go:130] >     },
	I0927 18:17:40.065469   50980 command_runner.go:130] >     {
	I0927 18:17:40.065480   50980 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 18:17:40.065489   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065500   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 18:17:40.065505   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065512   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065528   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 18:17:40.065543   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 18:17:40.065551   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065559   50980 command_runner.go:130] >       "size": "95237600",
	I0927 18:17:40.065569   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.065577   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.065584   50980 command_runner.go:130] >       },
	I0927 18:17:40.065591   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065600   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065607   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065619   50980 command_runner.go:130] >     },
	I0927 18:17:40.065628   50980 command_runner.go:130] >     {
	I0927 18:17:40.065639   50980 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 18:17:40.065648   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065659   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 18:17:40.065668   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065676   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065692   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 18:17:40.065706   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 18:17:40.065718   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065729   50980 command_runner.go:130] >       "size": "89437508",
	I0927 18:17:40.065738   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.065746   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.065753   50980 command_runner.go:130] >       },
	I0927 18:17:40.065761   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065770   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065777   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065786   50980 command_runner.go:130] >     },
	I0927 18:17:40.065792   50980 command_runner.go:130] >     {
	I0927 18:17:40.065806   50980 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 18:17:40.065815   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065825   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 18:17:40.065833   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065840   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065869   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 18:17:40.065884   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 18:17:40.065893   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065901   50980 command_runner.go:130] >       "size": "92733849",
	I0927 18:17:40.065911   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.065920   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065928   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065939   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065947   50980 command_runner.go:130] >     },
	I0927 18:17:40.065953   50980 command_runner.go:130] >     {
	I0927 18:17:40.065966   50980 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 18:17:40.065976   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065986   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 18:17:40.065994   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066001   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.066016   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 18:17:40.066029   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 18:17:40.066037   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066044   50980 command_runner.go:130] >       "size": "68420934",
	I0927 18:17:40.066053   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.066060   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.066069   50980 command_runner.go:130] >       },
	I0927 18:17:40.066076   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.066086   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.066095   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.066103   50980 command_runner.go:130] >     },
	I0927 18:17:40.066110   50980 command_runner.go:130] >     {
	I0927 18:17:40.066123   50980 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 18:17:40.066133   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.066142   50980 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 18:17:40.066150   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066157   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.066187   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 18:17:40.066209   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 18:17:40.066218   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066225   50980 command_runner.go:130] >       "size": "742080",
	I0927 18:17:40.066234   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.066242   50980 command_runner.go:130] >         "value": "65535"
	I0927 18:17:40.066294   50980 command_runner.go:130] >       },
	I0927 18:17:40.066303   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.066309   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.066319   50980 command_runner.go:130] >       "pinned": true
	I0927 18:17:40.066326   50980 command_runner.go:130] >     }
	I0927 18:17:40.066336   50980 command_runner.go:130] >   ]
	I0927 18:17:40.066344   50980 command_runner.go:130] > }
	I0927 18:17:40.066519   50980 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:17:40.066543   50980 cache_images.go:84] Images are preloaded, skipping loading
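The preload check above relies on decoding the JSON printed by sudo crictl images --output json. A minimal Go sketch that parses that output into the fields visible in the log (id, repoTags, repoDigests, size, pinned); the struct is inferred from the logged JSON and is illustrative only:

package main

// Runs crictl and lists the images it reports, using field names taken from the
// JSON shown in the log above.
import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, "size:", img.Size, "pinned:", img.Pinned)
	}
}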
	I0927 18:17:40.066555   50980 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.1 crio true true} ...
	I0927 18:17:40.066705   50980 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-922780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
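The kubelet drop-in above overrides ExecStart with node-specific flags. A short Go sketch that renders the same unit text from the values logged for this node (v1.31.1, multinode-922780, 192.168.39.6); it is a template illustration, not the generator minikube uses:

package main

// Renders a kubelet systemd drop-in matching the logged unit, substituting the
// Kubernetes version, node name and node IP.
import (
	"log"
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	vals := map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "multinode-922780",
		"NodeIP":            "192.168.39.6",
	}
	if err := t.Execute(os.Stdout, vals); err != nil {
		log.Fatal(err)
	}
}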
	I0927 18:17:40.066796   50980 ssh_runner.go:195] Run: crio config
	I0927 18:17:40.105427   50980 command_runner.go:130] ! time="2024-09-27 18:17:40.079602450Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0927 18:17:40.111548   50980 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0927 18:17:40.118321   50980 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0927 18:17:40.118350   50980 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0927 18:17:40.118360   50980 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0927 18:17:40.118365   50980 command_runner.go:130] > #
	I0927 18:17:40.118379   50980 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0927 18:17:40.118388   50980 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0927 18:17:40.118397   50980 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0927 18:17:40.118418   50980 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0927 18:17:40.118427   50980 command_runner.go:130] > # reload'.
	I0927 18:17:40.118437   50980 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0927 18:17:40.118448   50980 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0927 18:17:40.118456   50980 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0927 18:17:40.118461   50980 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0927 18:17:40.118473   50980 command_runner.go:130] > [crio]
	I0927 18:17:40.118482   50980 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0927 18:17:40.118487   50980 command_runner.go:130] > # containers images, in this directory.
	I0927 18:17:40.118492   50980 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0927 18:17:40.118505   50980 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0927 18:17:40.118513   50980 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0927 18:17:40.118521   50980 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0927 18:17:40.118525   50980 command_runner.go:130] > # imagestore = ""
	I0927 18:17:40.118533   50980 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0927 18:17:40.118538   50980 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0927 18:17:40.118543   50980 command_runner.go:130] > storage_driver = "overlay"
	I0927 18:17:40.118548   50980 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0927 18:17:40.118553   50980 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0927 18:17:40.118557   50980 command_runner.go:130] > storage_option = [
	I0927 18:17:40.118562   50980 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0927 18:17:40.118567   50980 command_runner.go:130] > ]
	I0927 18:17:40.118574   50980 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0927 18:17:40.118580   50980 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0927 18:17:40.118585   50980 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0927 18:17:40.118589   50980 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0927 18:17:40.118598   50980 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0927 18:17:40.118602   50980 command_runner.go:130] > # always happen on a node reboot
	I0927 18:17:40.118608   50980 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0927 18:17:40.118619   50980 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0927 18:17:40.118627   50980 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0927 18:17:40.118631   50980 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0927 18:17:40.118636   50980 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0927 18:17:40.118662   50980 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0927 18:17:40.118676   50980 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0927 18:17:40.118682   50980 command_runner.go:130] > # internal_wipe = true
	I0927 18:17:40.118689   50980 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0927 18:17:40.118696   50980 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0927 18:17:40.118700   50980 command_runner.go:130] > # internal_repair = false
	I0927 18:17:40.118713   50980 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0927 18:17:40.118721   50980 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0927 18:17:40.118727   50980 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0927 18:17:40.118734   50980 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0927 18:17:40.118742   50980 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0927 18:17:40.118747   50980 command_runner.go:130] > [crio.api]
	I0927 18:17:40.118752   50980 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0927 18:17:40.118759   50980 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0927 18:17:40.118764   50980 command_runner.go:130] > # IP address on which the stream server will listen.
	I0927 18:17:40.118769   50980 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0927 18:17:40.118775   50980 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0927 18:17:40.118782   50980 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0927 18:17:40.118785   50980 command_runner.go:130] > # stream_port = "0"
	I0927 18:17:40.118792   50980 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0927 18:17:40.118796   50980 command_runner.go:130] > # stream_enable_tls = false
	I0927 18:17:40.118803   50980 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0927 18:17:40.118809   50980 command_runner.go:130] > # stream_idle_timeout = ""
	I0927 18:17:40.118815   50980 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0927 18:17:40.118823   50980 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0927 18:17:40.118826   50980 command_runner.go:130] > # minutes.
	I0927 18:17:40.118830   50980 command_runner.go:130] > # stream_tls_cert = ""
	I0927 18:17:40.118835   50980 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0927 18:17:40.118843   50980 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0927 18:17:40.118847   50980 command_runner.go:130] > # stream_tls_key = ""
	I0927 18:17:40.118854   50980 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0927 18:17:40.118860   50980 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0927 18:17:40.118885   50980 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0927 18:17:40.118891   50980 command_runner.go:130] > # stream_tls_ca = ""
	I0927 18:17:40.118898   50980 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 18:17:40.118902   50980 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0927 18:17:40.118909   50980 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 18:17:40.118916   50980 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0927 18:17:40.118922   50980 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0927 18:17:40.118935   50980 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0927 18:17:40.118941   50980 command_runner.go:130] > [crio.runtime]
	I0927 18:17:40.118946   50980 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0927 18:17:40.118953   50980 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0927 18:17:40.118958   50980 command_runner.go:130] > # "nofile=1024:2048"
	I0927 18:17:40.118965   50980 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0927 18:17:40.118970   50980 command_runner.go:130] > # default_ulimits = [
	I0927 18:17:40.118975   50980 command_runner.go:130] > # ]
	I0927 18:17:40.118980   50980 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0927 18:17:40.118985   50980 command_runner.go:130] > # no_pivot = false
	I0927 18:17:40.118993   50980 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0927 18:17:40.119001   50980 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0927 18:17:40.119005   50980 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0927 18:17:40.119011   50980 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0927 18:17:40.119016   50980 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0927 18:17:40.119022   50980 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 18:17:40.119027   50980 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0927 18:17:40.119032   50980 command_runner.go:130] > # Cgroup setting for conmon
	I0927 18:17:40.119040   50980 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0927 18:17:40.119044   50980 command_runner.go:130] > conmon_cgroup = "pod"
	I0927 18:17:40.119052   50980 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0927 18:17:40.119057   50980 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0927 18:17:40.119065   50980 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 18:17:40.119069   50980 command_runner.go:130] > conmon_env = [
	I0927 18:17:40.119077   50980 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 18:17:40.119080   50980 command_runner.go:130] > ]
	I0927 18:17:40.119086   50980 command_runner.go:130] > # Additional environment variables to set for all the
	I0927 18:17:40.119091   50980 command_runner.go:130] > # containers. These are overridden if set in the
	I0927 18:17:40.119099   50980 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0927 18:17:40.119102   50980 command_runner.go:130] > # default_env = [
	I0927 18:17:40.119108   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119113   50980 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0927 18:17:40.119119   50980 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0927 18:17:40.119130   50980 command_runner.go:130] > # selinux = false
	I0927 18:17:40.119136   50980 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0927 18:17:40.119143   50980 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0927 18:17:40.119148   50980 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0927 18:17:40.119154   50980 command_runner.go:130] > # seccomp_profile = ""
	I0927 18:17:40.119159   50980 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0927 18:17:40.119165   50980 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0927 18:17:40.119171   50980 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0927 18:17:40.119177   50980 command_runner.go:130] > # which might increase security.
	I0927 18:17:40.119184   50980 command_runner.go:130] > # This option is currently deprecated,
	I0927 18:17:40.119192   50980 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0927 18:17:40.119197   50980 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0927 18:17:40.119205   50980 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0927 18:17:40.119210   50980 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0927 18:17:40.119220   50980 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0927 18:17:40.119228   50980 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0927 18:17:40.119233   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.119237   50980 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0927 18:17:40.119242   50980 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0927 18:17:40.119264   50980 command_runner.go:130] > # the cgroup blockio controller.
	I0927 18:17:40.119270   50980 command_runner.go:130] > # blockio_config_file = ""
	I0927 18:17:40.119276   50980 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0927 18:17:40.119282   50980 command_runner.go:130] > # blockio parameters.
	I0927 18:17:40.119286   50980 command_runner.go:130] > # blockio_reload = false
	I0927 18:17:40.119294   50980 command_runner.go:130] > # Used to change the irqbalance service config file path, which is used for configuring
	I0927 18:17:40.119298   50980 command_runner.go:130] > # the irqbalance daemon.
	I0927 18:17:40.119306   50980 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0927 18:17:40.119311   50980 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I0927 18:17:40.119320   50980 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0927 18:17:40.119326   50980 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0927 18:17:40.119334   50980 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0927 18:17:40.119340   50980 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0927 18:17:40.119347   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.119356   50980 command_runner.go:130] > # rdt_config_file = ""
	I0927 18:17:40.119363   50980 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0927 18:17:40.119368   50980 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0927 18:17:40.119399   50980 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0927 18:17:40.119405   50980 command_runner.go:130] > # separate_pull_cgroup = ""
	I0927 18:17:40.119411   50980 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0927 18:17:40.119417   50980 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0927 18:17:40.119421   50980 command_runner.go:130] > # will be added.
	I0927 18:17:40.119425   50980 command_runner.go:130] > # default_capabilities = [
	I0927 18:17:40.119430   50980 command_runner.go:130] > # 	"CHOWN",
	I0927 18:17:40.119436   50980 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0927 18:17:40.119440   50980 command_runner.go:130] > # 	"FSETID",
	I0927 18:17:40.119444   50980 command_runner.go:130] > # 	"FOWNER",
	I0927 18:17:40.119447   50980 command_runner.go:130] > # 	"SETGID",
	I0927 18:17:40.119451   50980 command_runner.go:130] > # 	"SETUID",
	I0927 18:17:40.119457   50980 command_runner.go:130] > # 	"SETPCAP",
	I0927 18:17:40.119460   50980 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0927 18:17:40.119464   50980 command_runner.go:130] > # 	"KILL",
	I0927 18:17:40.119467   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119475   50980 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0927 18:17:40.119483   50980 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0927 18:17:40.119487   50980 command_runner.go:130] > # add_inheritable_capabilities = false
	I0927 18:17:40.119496   50980 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0927 18:17:40.119508   50980 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 18:17:40.119512   50980 command_runner.go:130] > default_sysctls = [
	I0927 18:17:40.119516   50980 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0927 18:17:40.119520   50980 command_runner.go:130] > ]
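The default_sysctls entries logged above are plain "key=value" strings; net.ipv4.ip_unprivileged_port_start=0 lets unprivileged container processes bind low ports. Purely as an illustration of that format (not CRI-O's actual implementation; the helper name below is invented), such an entry maps onto a path under /proc/sys like this:

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	// applySysctl writes one "key=value" sysctl entry under /proc/sys.
	// Sketch only: CRI-O/runc apply sysctls inside the container's own namespaces.
	func applySysctl(entry string) error {
		kv := strings.SplitN(entry, "=", 2)
		if len(kv) != 2 {
			return fmt.Errorf("malformed sysctl entry %q", entry)
		}
		// net.ipv4.ip_unprivileged_port_start -> /proc/sys/net/ipv4/ip_unprivileged_port_start
		path := filepath.Join("/proc/sys", strings.ReplaceAll(kv[0], ".", "/"))
		return os.WriteFile(path, []byte(kv[1]), 0o644)
	}
	
	func main() {
		if err := applySysctl("net.ipv4.ip_unprivileged_port_start=0"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}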
	I0927 18:17:40.119524   50980 command_runner.go:130] > # List of devices on the host that a
	I0927 18:17:40.119530   50980 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0927 18:17:40.119536   50980 command_runner.go:130] > # allowed_devices = [
	I0927 18:17:40.119539   50980 command_runner.go:130] > # 	"/dev/fuse",
	I0927 18:17:40.119542   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119547   50980 command_runner.go:130] > # List of additional devices, specified as
	I0927 18:17:40.119559   50980 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0927 18:17:40.119566   50980 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0927 18:17:40.119571   50980 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 18:17:40.119577   50980 command_runner.go:130] > # additional_devices = [
	I0927 18:17:40.119581   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119586   50980 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0927 18:17:40.119591   50980 command_runner.go:130] > # cdi_spec_dirs = [
	I0927 18:17:40.119595   50980 command_runner.go:130] > # 	"/etc/cdi",
	I0927 18:17:40.119599   50980 command_runner.go:130] > # 	"/var/run/cdi",
	I0927 18:17:40.119602   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119608   50980 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0927 18:17:40.119616   50980 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0927 18:17:40.119619   50980 command_runner.go:130] > # Defaults to false.
	I0927 18:17:40.119624   50980 command_runner.go:130] > # device_ownership_from_security_context = false
	I0927 18:17:40.119632   50980 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0927 18:17:40.119638   50980 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0927 18:17:40.119644   50980 command_runner.go:130] > # hooks_dir = [
	I0927 18:17:40.119648   50980 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0927 18:17:40.119651   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119657   50980 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0927 18:17:40.119666   50980 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0927 18:17:40.119671   50980 command_runner.go:130] > # its default mounts from the following two files:
	I0927 18:17:40.119676   50980 command_runner.go:130] > #
	I0927 18:17:40.119682   50980 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0927 18:17:40.119690   50980 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0927 18:17:40.119696   50980 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0927 18:17:40.119701   50980 command_runner.go:130] > #
	I0927 18:17:40.119706   50980 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0927 18:17:40.119721   50980 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0927 18:17:40.119729   50980 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0927 18:17:40.119736   50980 command_runner.go:130] > #      only add mounts it finds in this file.
	I0927 18:17:40.119740   50980 command_runner.go:130] > #
	I0927 18:17:40.119744   50980 command_runner.go:130] > # default_mounts_file = ""
	I0927 18:17:40.119754   50980 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0927 18:17:40.119760   50980 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0927 18:17:40.119764   50980 command_runner.go:130] > pids_limit = 1024
	I0927 18:17:40.119769   50980 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0927 18:17:40.119775   50980 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0927 18:17:40.119780   50980 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0927 18:17:40.119788   50980 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0927 18:17:40.119791   50980 command_runner.go:130] > # log_size_max = -1
	I0927 18:17:40.119797   50980 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0927 18:17:40.119801   50980 command_runner.go:130] > # log_to_journald = false
	I0927 18:17:40.119807   50980 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0927 18:17:40.119811   50980 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0927 18:17:40.119816   50980 command_runner.go:130] > # Path to directory for container attach sockets.
	I0927 18:17:40.119824   50980 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0927 18:17:40.119829   50980 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0927 18:17:40.119834   50980 command_runner.go:130] > # bind_mount_prefix = ""
	I0927 18:17:40.119839   50980 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0927 18:17:40.119846   50980 command_runner.go:130] > # read_only = false
	I0927 18:17:40.119851   50980 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0927 18:17:40.119859   50980 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0927 18:17:40.119863   50980 command_runner.go:130] > # live configuration reload.
	I0927 18:17:40.119868   50980 command_runner.go:130] > # log_level = "info"
	I0927 18:17:40.119874   50980 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0927 18:17:40.119881   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.119885   50980 command_runner.go:130] > # log_filter = ""
	I0927 18:17:40.119891   50980 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0927 18:17:40.119900   50980 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0927 18:17:40.119903   50980 command_runner.go:130] > # separated by comma.
	I0927 18:17:40.119910   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.119916   50980 command_runner.go:130] > # uid_mappings = ""
	I0927 18:17:40.119921   50980 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0927 18:17:40.119927   50980 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0927 18:17:40.119933   50980 command_runner.go:130] > # separated by comma.
	I0927 18:17:40.119952   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.119960   50980 command_runner.go:130] > # gid_mappings = ""
	I0927 18:17:40.119966   50980 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0927 18:17:40.119974   50980 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 18:17:40.119983   50980 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 18:17:40.119993   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.119997   50980 command_runner.go:130] > # minimum_mappable_uid = -1
	I0927 18:17:40.120003   50980 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0927 18:17:40.120008   50980 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 18:17:40.120015   50980 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 18:17:40.120022   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.120029   50980 command_runner.go:130] > # minimum_mappable_gid = -1
	I0927 18:17:40.120034   50980 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0927 18:17:40.120041   50980 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0927 18:17:40.120046   50980 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0927 18:17:40.120052   50980 command_runner.go:130] > # ctr_stop_timeout = 30
	I0927 18:17:40.120057   50980 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0927 18:17:40.120064   50980 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0927 18:17:40.120069   50980 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0927 18:17:40.120076   50980 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0927 18:17:40.120080   50980 command_runner.go:130] > drop_infra_ctr = false
	I0927 18:17:40.120086   50980 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0927 18:17:40.120093   50980 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0927 18:17:40.120100   50980 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0927 18:17:40.120106   50980 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0927 18:17:40.120112   50980 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0927 18:17:40.120119   50980 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0927 18:17:40.120124   50980 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0927 18:17:40.120131   50980 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0927 18:17:40.120135   50980 command_runner.go:130] > # shared_cpuset = ""
	I0927 18:17:40.120140   50980 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0927 18:17:40.120145   50980 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0927 18:17:40.120150   50980 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0927 18:17:40.120162   50980 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0927 18:17:40.120168   50980 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0927 18:17:40.120174   50980 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0927 18:17:40.120185   50980 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0927 18:17:40.120191   50980 command_runner.go:130] > # enable_criu_support = false
	I0927 18:17:40.120196   50980 command_runner.go:130] > # Enable/disable the generation of container and
	I0927 18:17:40.120202   50980 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I0927 18:17:40.120207   50980 command_runner.go:130] > # enable_pod_events = false
	I0927 18:17:40.120213   50980 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0927 18:17:40.120228   50980 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0927 18:17:40.120233   50980 command_runner.go:130] > # default_runtime = "runc"
	I0927 18:17:40.120238   50980 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0927 18:17:40.120247   50980 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0927 18:17:40.120267   50980 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0927 18:17:40.120274   50980 command_runner.go:130] > # creation as a file is not desired either.
	I0927 18:17:40.120282   50980 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0927 18:17:40.120287   50980 command_runner.go:130] > # the hostname is being managed dynamically.
	I0927 18:17:40.120292   50980 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0927 18:17:40.120295   50980 command_runner.go:130] > # ]
	I0927 18:17:40.120301   50980 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0927 18:17:40.120309   50980 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0927 18:17:40.120314   50980 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0927 18:17:40.120319   50980 command_runner.go:130] > # Each entry in the table should follow the format:
	I0927 18:17:40.120324   50980 command_runner.go:130] > #
	I0927 18:17:40.120329   50980 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0927 18:17:40.120335   50980 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0927 18:17:40.120381   50980 command_runner.go:130] > # runtime_type = "oci"
	I0927 18:17:40.120388   50980 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0927 18:17:40.120393   50980 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0927 18:17:40.120397   50980 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0927 18:17:40.120401   50980 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0927 18:17:40.120404   50980 command_runner.go:130] > # monitor_env = []
	I0927 18:17:40.120414   50980 command_runner.go:130] > # privileged_without_host_devices = false
	I0927 18:17:40.120419   50980 command_runner.go:130] > # allowed_annotations = []
	I0927 18:17:40.120425   50980 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0927 18:17:40.120430   50980 command_runner.go:130] > # Where:
	I0927 18:17:40.120435   50980 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0927 18:17:40.120441   50980 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0927 18:17:40.120449   50980 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0927 18:17:40.120456   50980 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0927 18:17:40.120463   50980 command_runner.go:130] > #   in $PATH.
	I0927 18:17:40.120469   50980 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0927 18:17:40.120476   50980 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0927 18:17:40.120482   50980 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0927 18:17:40.120488   50980 command_runner.go:130] > #   state.
	I0927 18:17:40.120494   50980 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0927 18:17:40.120502   50980 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0927 18:17:40.120510   50980 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0927 18:17:40.120516   50980 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0927 18:17:40.120523   50980 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0927 18:17:40.120529   50980 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0927 18:17:40.120536   50980 command_runner.go:130] > #   The currently recognized values are:
	I0927 18:17:40.120542   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0927 18:17:40.120551   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0927 18:17:40.120557   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0927 18:17:40.120563   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0927 18:17:40.120571   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0927 18:17:40.120579   50980 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0927 18:17:40.120585   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0927 18:17:40.120593   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0927 18:17:40.120599   50980 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0927 18:17:40.120607   50980 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0927 18:17:40.120611   50980 command_runner.go:130] > #   deprecated option "conmon".
	I0927 18:17:40.120620   50980 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0927 18:17:40.120626   50980 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0927 18:17:40.120639   50980 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0927 18:17:40.120646   50980 command_runner.go:130] > #   should be moved to the container's cgroup
	I0927 18:17:40.120653   50980 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0927 18:17:40.120660   50980 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0927 18:17:40.120666   50980 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0927 18:17:40.120673   50980 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0927 18:17:40.120677   50980 command_runner.go:130] > #
	I0927 18:17:40.120681   50980 command_runner.go:130] > # Using the seccomp notifier feature:
	I0927 18:17:40.120687   50980 command_runner.go:130] > #
	I0927 18:17:40.120694   50980 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0927 18:17:40.120700   50980 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0927 18:17:40.120705   50980 command_runner.go:130] > #
	I0927 18:17:40.120711   50980 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0927 18:17:40.120719   50980 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0927 18:17:40.120721   50980 command_runner.go:130] > #
	I0927 18:17:40.120727   50980 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0927 18:17:40.120733   50980 command_runner.go:130] > # feature.
	I0927 18:17:40.120736   50980 command_runner.go:130] > #
	I0927 18:17:40.120742   50980 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0927 18:17:40.120750   50980 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0927 18:17:40.120756   50980 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0927 18:17:40.120764   50980 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0927 18:17:40.120770   50980 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0927 18:17:40.120775   50980 command_runner.go:130] > #
	I0927 18:17:40.120781   50980 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0927 18:17:40.120786   50980 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0927 18:17:40.120791   50980 command_runner.go:130] > #
	I0927 18:17:40.120797   50980 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0927 18:17:40.120803   50980 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0927 18:17:40.120806   50980 command_runner.go:130] > #
	I0927 18:17:40.120812   50980 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0927 18:17:40.120819   50980 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0927 18:17:40.120822   50980 command_runner.go:130] > # limitation.
	I0927 18:17:40.120833   50980 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0927 18:17:40.120840   50980 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0927 18:17:40.120843   50980 command_runner.go:130] > runtime_type = "oci"
	I0927 18:17:40.120850   50980 command_runner.go:130] > runtime_root = "/run/runc"
	I0927 18:17:40.120854   50980 command_runner.go:130] > runtime_config_path = ""
	I0927 18:17:40.120859   50980 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0927 18:17:40.120866   50980 command_runner.go:130] > monitor_cgroup = "pod"
	I0927 18:17:40.120870   50980 command_runner.go:130] > monitor_exec_cgroup = ""
	I0927 18:17:40.120874   50980 command_runner.go:130] > monitor_env = [
	I0927 18:17:40.120879   50980 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 18:17:40.120884   50980 command_runner.go:130] > ]
	I0927 18:17:40.120888   50980 command_runner.go:130] > privileged_without_host_devices = false
	I0927 18:17:40.120894   50980 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0927 18:17:40.120902   50980 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0927 18:17:40.120908   50980 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0927 18:17:40.120917   50980 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0927 18:17:40.120925   50980 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0927 18:17:40.120933   50980 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0927 18:17:40.120942   50980 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0927 18:17:40.120952   50980 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0927 18:17:40.120957   50980 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0927 18:17:40.120964   50980 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0927 18:17:40.120969   50980 command_runner.go:130] > # Example:
	I0927 18:17:40.120973   50980 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0927 18:17:40.120978   50980 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0927 18:17:40.120985   50980 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0927 18:17:40.120990   50980 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0927 18:17:40.120993   50980 command_runner.go:130] > # cpuset = 0
	I0927 18:17:40.120997   50980 command_runner.go:130] > # cpushares = "0-1"
	I0927 18:17:40.121003   50980 command_runner.go:130] > # Where:
	I0927 18:17:40.121007   50980 command_runner.go:130] > # The workload name is workload-type.
	I0927 18:17:40.121013   50980 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0927 18:17:40.121020   50980 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0927 18:17:40.121030   50980 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0927 18:17:40.121040   50980 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0927 18:17:40.121048   50980 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
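As a hedged sketch of the annotation shape described above, using only the Go standard library: the workload name "workload-type" comes from the example in the comments, while the container name "ctr-a" and the cpushares value are invented for illustration.

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		// Per-container override in the shape shown above:
		// "io.crio.workload-type/$container_name = {"cpushares": "value"}"
		override, err := json.Marshal(map[string]string{"cpushares": "512"})
		if err != nil {
			panic(err)
		}
		annotations := map[string]string{
			"io.crio/workload":            "", // activation annotation; the value is ignored
			"io.crio.workload-type/ctr-a": string(override),
		}
		fmt.Println(annotations)
	}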
	I0927 18:17:40.121052   50980 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0927 18:17:40.121058   50980 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0927 18:17:40.121063   50980 command_runner.go:130] > # Default value is set to true
	I0927 18:17:40.121067   50980 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0927 18:17:40.121074   50980 command_runner.go:130] > # disable_hostport_mapping determines whether to disable
	I0927 18:17:40.121079   50980 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0927 18:17:40.121086   50980 command_runner.go:130] > # Default value is set to 'false'
	I0927 18:17:40.121090   50980 command_runner.go:130] > # disable_hostport_mapping = false
	I0927 18:17:40.121098   50980 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0927 18:17:40.121102   50980 command_runner.go:130] > #
	I0927 18:17:40.121107   50980 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0927 18:17:40.121112   50980 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0927 18:17:40.121117   50980 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0927 18:17:40.121123   50980 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0927 18:17:40.121128   50980 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0927 18:17:40.121134   50980 command_runner.go:130] > [crio.image]
	I0927 18:17:40.121140   50980 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0927 18:17:40.121143   50980 command_runner.go:130] > # default_transport = "docker://"
	I0927 18:17:40.121149   50980 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0927 18:17:40.121155   50980 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0927 18:17:40.121158   50980 command_runner.go:130] > # global_auth_file = ""
	I0927 18:17:40.121162   50980 command_runner.go:130] > # The image used to instantiate infra containers.
	I0927 18:17:40.121167   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.121171   50980 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0927 18:17:40.121177   50980 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0927 18:17:40.121185   50980 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0927 18:17:40.121190   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.121195   50980 command_runner.go:130] > # pause_image_auth_file = ""
	I0927 18:17:40.121201   50980 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0927 18:17:40.121209   50980 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0927 18:17:40.121218   50980 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0927 18:17:40.121226   50980 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0927 18:17:40.121230   50980 command_runner.go:130] > # pause_command = "/pause"
	I0927 18:17:40.121238   50980 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0927 18:17:40.121244   50980 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0927 18:17:40.121253   50980 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0927 18:17:40.121263   50980 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0927 18:17:40.121269   50980 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0927 18:17:40.121277   50980 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0927 18:17:40.121281   50980 command_runner.go:130] > # pinned_images = [
	I0927 18:17:40.121286   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121291   50980 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0927 18:17:40.121299   50980 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0927 18:17:40.121305   50980 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0927 18:17:40.121311   50980 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0927 18:17:40.121316   50980 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0927 18:17:40.121321   50980 command_runner.go:130] > # signature_policy = ""
	I0927 18:17:40.121327   50980 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0927 18:17:40.121335   50980 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0927 18:17:40.121341   50980 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0927 18:17:40.121349   50980 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0927 18:17:40.121357   50980 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0927 18:17:40.121361   50980 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0927 18:17:40.121368   50980 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0927 18:17:40.121374   50980 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0927 18:17:40.121380   50980 command_runner.go:130] > # changing them here.
	I0927 18:17:40.121384   50980 command_runner.go:130] > # insecure_registries = [
	I0927 18:17:40.121387   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121393   50980 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0927 18:17:40.121400   50980 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0927 18:17:40.121404   50980 command_runner.go:130] > # image_volumes = "mkdir"
	I0927 18:17:40.121411   50980 command_runner.go:130] > # Temporary directory to use for storing big files
	I0927 18:17:40.121415   50980 command_runner.go:130] > # big_files_temporary_dir = ""
	I0927 18:17:40.121428   50980 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0927 18:17:40.121434   50980 command_runner.go:130] > # CNI plugins.
	I0927 18:17:40.121437   50980 command_runner.go:130] > [crio.network]
	I0927 18:17:40.121443   50980 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0927 18:17:40.121449   50980 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0927 18:17:40.121455   50980 command_runner.go:130] > # cni_default_network = ""
	I0927 18:17:40.121460   50980 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0927 18:17:40.121467   50980 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0927 18:17:40.121471   50980 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0927 18:17:40.121477   50980 command_runner.go:130] > # plugin_dirs = [
	I0927 18:17:40.121481   50980 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0927 18:17:40.121484   50980 command_runner.go:130] > # ]
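Per the [crio.network] defaults above, CNI configuration is read from network_dir (/etc/cni/net.d/) and the first network found is used when cni_default_network is unset. A minimal Go sketch (standard library only) that lists candidate CNI config files for debugging:

	package main
	
	import (
		"fmt"
		"path/filepath"
		"sort"
	)
	
	func main() {
		// Default network_dir from the config dump above.
		conflists, err := filepath.Glob("/etc/cni/net.d/*.conflist")
		if err != nil {
			panic(err)
		}
		confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
		files := append(conflists, confs...)
		sort.Strings(files) // deterministic listing
		for _, f := range files {
			fmt.Println(f)
		}
	}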
	I0927 18:17:40.121490   50980 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0927 18:17:40.121496   50980 command_runner.go:130] > [crio.metrics]
	I0927 18:17:40.121501   50980 command_runner.go:130] > # Globally enable or disable metrics support.
	I0927 18:17:40.121507   50980 command_runner.go:130] > enable_metrics = true
	I0927 18:17:40.121511   50980 command_runner.go:130] > # Specify enabled metrics collectors.
	I0927 18:17:40.121530   50980 command_runner.go:130] > # Per default all metrics are enabled.
	I0927 18:17:40.121542   50980 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0927 18:17:40.121548   50980 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0927 18:17:40.121556   50980 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0927 18:17:40.121560   50980 command_runner.go:130] > # metrics_collectors = [
	I0927 18:17:40.121563   50980 command_runner.go:130] > # 	"operations",
	I0927 18:17:40.121568   50980 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0927 18:17:40.121575   50980 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0927 18:17:40.121578   50980 command_runner.go:130] > # 	"operations_errors",
	I0927 18:17:40.121582   50980 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0927 18:17:40.121586   50980 command_runner.go:130] > # 	"image_pulls_by_name",
	I0927 18:17:40.121590   50980 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0927 18:17:40.121598   50980 command_runner.go:130] > # 	"image_pulls_failures",
	I0927 18:17:40.121606   50980 command_runner.go:130] > # 	"image_pulls_successes",
	I0927 18:17:40.121610   50980 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0927 18:17:40.121616   50980 command_runner.go:130] > # 	"image_layer_reuse",
	I0927 18:17:40.121625   50980 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0927 18:17:40.121631   50980 command_runner.go:130] > # 	"containers_oom_total",
	I0927 18:17:40.121635   50980 command_runner.go:130] > # 	"containers_oom",
	I0927 18:17:40.121641   50980 command_runner.go:130] > # 	"processes_defunct",
	I0927 18:17:40.121645   50980 command_runner.go:130] > # 	"operations_total",
	I0927 18:17:40.121649   50980 command_runner.go:130] > # 	"operations_latency_seconds",
	I0927 18:17:40.121653   50980 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0927 18:17:40.121658   50980 command_runner.go:130] > # 	"operations_errors_total",
	I0927 18:17:40.121664   50980 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0927 18:17:40.121669   50980 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0927 18:17:40.121674   50980 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0927 18:17:40.121678   50980 command_runner.go:130] > # 	"image_pulls_success_total",
	I0927 18:17:40.121685   50980 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0927 18:17:40.121689   50980 command_runner.go:130] > # 	"containers_oom_count_total",
	I0927 18:17:40.121693   50980 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0927 18:17:40.121699   50980 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0927 18:17:40.121703   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121708   50980 command_runner.go:130] > # The port on which the metrics server will listen.
	I0927 18:17:40.121714   50980 command_runner.go:130] > # metrics_port = 9090
	I0927 18:17:40.121718   50980 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0927 18:17:40.121724   50980 command_runner.go:130] > # metrics_socket = ""
	I0927 18:17:40.121729   50980 command_runner.go:130] > # The certificate for the secure metrics server.
	I0927 18:17:40.121734   50980 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0927 18:17:40.121741   50980 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0927 18:17:40.121745   50980 command_runner.go:130] > # certificate on any modification event.
	I0927 18:17:40.121749   50980 command_runner.go:130] > # metrics_cert = ""
	I0927 18:17:40.121754   50980 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0927 18:17:40.121761   50980 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0927 18:17:40.121765   50980 command_runner.go:130] > # metrics_key = ""
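With enable_metrics = true and the default metrics_port of 9090 noted above, the CRI-O Prometheus endpoint can be scraped over plain HTTP on the node. A small Go sketch of such a scrape, assuming the default port and localhost access (neither is verified from this run):

	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)
	
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Default CRI-O metrics port from the config above; adjust if metrics_port is changed.
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer resp.Body.Close()
		io.Copy(os.Stdout, resp.Body) // dump the raw Prometheus exposition text
	}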
	I0927 18:17:40.121772   50980 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0927 18:17:40.121775   50980 command_runner.go:130] > [crio.tracing]
	I0927 18:17:40.121781   50980 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0927 18:17:40.121785   50980 command_runner.go:130] > # enable_tracing = false
	I0927 18:17:40.121795   50980 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0927 18:17:40.121802   50980 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0927 18:17:40.121809   50980 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0927 18:17:40.121816   50980 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0927 18:17:40.121820   50980 command_runner.go:130] > # CRI-O NRI configuration.
	I0927 18:17:40.121825   50980 command_runner.go:130] > [crio.nri]
	I0927 18:17:40.121829   50980 command_runner.go:130] > # Globally enable or disable NRI.
	I0927 18:17:40.121833   50980 command_runner.go:130] > # enable_nri = false
	I0927 18:17:40.121839   50980 command_runner.go:130] > # NRI socket to listen on.
	I0927 18:17:40.121847   50980 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0927 18:17:40.121851   50980 command_runner.go:130] > # NRI plugin directory to use.
	I0927 18:17:40.121858   50980 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0927 18:17:40.121862   50980 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0927 18:17:40.121869   50980 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0927 18:17:40.121874   50980 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0927 18:17:40.121880   50980 command_runner.go:130] > # nri_disable_connections = false
	I0927 18:17:40.121885   50980 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0927 18:17:40.121891   50980 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0927 18:17:40.121896   50980 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0927 18:17:40.121903   50980 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0927 18:17:40.121908   50980 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0927 18:17:40.121912   50980 command_runner.go:130] > [crio.stats]
	I0927 18:17:40.121919   50980 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0927 18:17:40.121924   50980 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0927 18:17:40.121931   50980 command_runner.go:130] > # stats_collection_period = 0
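The dump above is the effective CRI-O TOML configuration. As an illustrative sketch only, assuming the third-party github.com/BurntSushi/toml decoder (not something this test uses) and assuming the file lives at /etc/crio/crio.conf, a few of the fields logged above could be decoded like this:

	package main
	
	import (
		"fmt"
	
		"github.com/BurntSushi/toml"
	)
	
	// Subset of the fields seen in the dump above; [crio.runtime] is the
	// "runtime" table nested inside the top-level "crio" table.
	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string `toml:"cgroup_manager"`
				PidsLimit     int    `toml:"pids_limit"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}
	
	func main() {
		var cfg crioConfig
		// Path assumed for illustration; adjust to wherever the config was dumped from.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("cgroup_manager=%s pids_limit=%d pause_image=%s\n",
			cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.PidsLimit, cfg.Crio.Image.PauseImage)
	}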
	I0927 18:17:40.122051   50980 cni.go:84] Creating CNI manager for ""
	I0927 18:17:40.122061   50980 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 18:17:40.122069   50980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:17:40.122090   50980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-922780 NodeName:multinode-922780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:17:40.122216   50980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-922780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
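The kubeadm config above is rendered from the options struct logged just before it. As a rough, hypothetical illustration of that kind of templating with the Go standard library's text/template (the struct fields and template text below are invented for the example and are not minikube's actual template):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Hypothetical subset of the options that feed the kubeadm config above.
	type kubeadmOpts struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		KubernetesVersion string
		PodSubnet         string
	}
	
	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`
	
	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress:  "192.168.39.6",
			APIServerPort:     8443,
			NodeName:          "multinode-922780",
			KubernetesVersion: "v1.31.1",
			PodSubnet:         "10.244.0.0/16",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}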
	I0927 18:17:40.122281   50980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:17:40.132358   50980 command_runner.go:130] > kubeadm
	I0927 18:17:40.132376   50980 command_runner.go:130] > kubectl
	I0927 18:17:40.132380   50980 command_runner.go:130] > kubelet
	I0927 18:17:40.132398   50980 binaries.go:44] Found k8s binaries, skipping transfer
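The check above only confirms that kubeadm, kubectl and kubelet already exist under the versioned binaries directory before skipping the transfer. A minimal equivalent check in Go (standard library; the directory is the one printed in the log):

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	func main() {
		dir := "/var/lib/minikube/binaries/v1.31.1" // directory listed in the log above
		missing := false
		for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
				fmt.Println("missing:", bin, err)
				missing = true
			}
		}
		if !missing {
			fmt.Println("Found k8s binaries, skipping transfer")
		}
	}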
	I0927 18:17:40.132442   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:17:40.142159   50980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0927 18:17:40.160075   50980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:17:40.177146   50980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0927 18:17:40.195057   50980 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0927 18:17:40.198855   50980 command_runner.go:130] > 192.168.39.6	control-plane.minikube.internal
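The grep above verifies that /etc/hosts on the guest already maps control-plane.minikube.internal to 192.168.39.6, so no entry needs to be appended. The same check as a Go sketch (standard library only):

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			fields := strings.Fields(scanner.Text())
			// Match "192.168.39.6   control-plane.minikube.internal" style entries.
			if len(fields) >= 2 && fields[0] == "192.168.39.6" && fields[1] == "control-plane.minikube.internal" {
				fmt.Println("host entry already present")
				return
			}
		}
		fmt.Println("host entry missing")
	}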
	I0927 18:17:40.198913   50980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:17:40.338989   50980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:17:40.352872   50980 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780 for IP: 192.168.39.6
	I0927 18:17:40.352895   50980 certs.go:194] generating shared ca certs ...
	I0927 18:17:40.352915   50980 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:17:40.353079   50980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:17:40.353132   50980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:17:40.353145   50980 certs.go:256] generating profile certs ...
	I0927 18:17:40.353252   50980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/client.key
	I0927 18:17:40.353359   50980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.key.f36a82d8
	I0927 18:17:40.353411   50980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.key
	I0927 18:17:40.353424   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 18:17:40.353450   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 18:17:40.353472   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 18:17:40.353505   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 18:17:40.353524   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 18:17:40.353545   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 18:17:40.353564   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 18:17:40.353580   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 18:17:40.353679   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:17:40.353723   50980 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:17:40.353737   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:17:40.353770   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:17:40.353799   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:17:40.353833   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:17:40.353885   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:17:40.353925   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.353945   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.353959   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.354724   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:17:40.379370   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:17:40.403431   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:17:40.428263   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:17:40.454122   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 18:17:40.479716   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 18:17:40.503418   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:17:40.531523   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 18:17:40.554759   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:17:40.578996   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:17:40.602757   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:17:40.626392   50980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
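
	(Editor's note) The scp lines above are minikube's ssh_runner pushing the generated certificates into the node at /var/lib/minikube/certs and writing the kubeconfig from memory. The internal ssh_runner API is not shown in this log; the sketch below is only a hypothetical equivalent using golang.org/x/crypto/ssh, with the SSH user, key path and destination chosen as assumptions.

```go
// Hypothetical sketch: push one local cert to a remote path over SSH,
// roughly what the "scp ... --> /var/lib/minikube/certs/..." lines above
// report. SSH user, key location and destination path are assumptions.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func copyFile(client *ssh.Client, localPath, remotePath string) error {
	data, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// Write the bytes on the remote side; "sudo tee" keeps root ownership.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-922780/id_rsa")) // assumed key location
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.6:22", &ssh.ClientConfig{
		User:            "docker", // assumed minikube node user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	if err := copyFile(client, "ca.crt", "/var/lib/minikube/certs/ca.crt"); err != nil {
		log.Fatal(err)
	}
}
```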
	I0927 18:17:40.643753   50980 ssh_runner.go:195] Run: openssl version
	I0927 18:17:40.649821   50980 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0927 18:17:40.649888   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:17:40.660387   50980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.664649   50980 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.664681   50980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.664715   50980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.670147   50980 command_runner.go:130] > 51391683
	I0927 18:17:40.670209   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:17:40.679204   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:17:40.695642   50980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.700595   50980 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.700636   50980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.700681   50980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.706517   50980 command_runner.go:130] > 3ec20f2e
	I0927 18:17:40.706601   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 18:17:40.716110   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:17:40.727290   50980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.731755   50980 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.731792   50980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.731847   50980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.737405   50980 command_runner.go:130] > b5213941
	I0927 18:17:40.737482   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
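
	(Editor's note) The three openssl/ln sequences above install each CA into the node's trust store: hash the certificate with `openssl x509 -hash -noout`, then symlink `/etc/ssl/certs/<hash>.0` to it (51391683.0, 3ec20f2e.0 and b5213941.0 here). The sketch below illustrates the same idea by shelling out to openssl exactly as the log does; the certificate path is a placeholder.

```go
// Minimal sketch of the trust-store step shown above: compute the OpenSSL
// subject hash of a CA certificate and create the /etc/ssl/certs/<hash>.0
// symlink that c_rehash-style lookups expect. The path is a placeholder.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the "ln -fs" in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```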
	I0927 18:17:40.747636   50980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:17:40.752077   50980 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:17:40.752104   50980 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0927 18:17:40.752111   50980 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I0927 18:17:40.752117   50980 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 18:17:40.752122   50980 command_runner.go:130] > Access: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752127   50980 command_runner.go:130] > Modify: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752132   50980 command_runner.go:130] > Change: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752136   50980 command_runner.go:130] >  Birth: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752194   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 18:17:40.757695   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.757765   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 18:17:40.763047   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.763114   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 18:17:40.768963   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.769117   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 18:17:40.774592   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.774671   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 18:17:40.779866   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.780090   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 18:17:40.785588   50980 command_runner.go:130] > Certificate will not expire
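
	(Editor's note) Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours; every control-plane cert answers "Certificate will not expire", so no regeneration is needed. The same check can be done natively; the sketch below parses a PEM certificate with crypto/x509 and compares NotAfter against a 24-hour window (the path is a placeholder).

```go
// Sketch of the expiry check the log performs with
// "openssl x509 -noout -in <cert> -checkend 86400": report whether the
// certificate's NotAfter falls within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no PEM certificate found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire") // would trigger regeneration
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```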
	I0927 18:17:40.785656   50980 kubeadm.go:392] StartCluster: {Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
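
	(Editor's note) The StartCluster dump above is minikube's full cluster configuration for multinode-922780: kvm2 driver, CRI-O runtime, Kubernetes v1.31.1, and three nodes (the control plane at 192.168.39.6:8443 plus workers m02 and m03). The struct below is only a simplified, hypothetical subset of that configuration, populated from the values printed in the log; it is not minikube's actual type.

```go
// Simplified, hypothetical mirror of a few fields from the cluster config
// printed above; the values come straight from the log, the type does not.
package main

import "fmt"

type Node struct {
	Name         string
	IP           string
	Port         int
	ControlPlane bool
}

type ClusterConfig struct {
	Name              string
	Driver            string
	ContainerRuntime  string
	KubernetesVersion string
	Memory            int // MB
	Nodes             []Node
}

func main() {
	cfg := ClusterConfig{
		Name:              "multinode-922780",
		Driver:            "kvm2",
		ContainerRuntime:  "crio",
		KubernetesVersion: "v1.31.1",
		Memory:            2200,
		Nodes: []Node{
			{Name: "", IP: "192.168.39.6", Port: 8443, ControlPlane: true},
			{Name: "m02", IP: "192.168.39.108", Port: 8443},
			{Name: "m03", IP: "192.168.39.130", Port: 0},
		},
	}
	fmt.Printf("%s: %d node(s), control plane %s:%d\n",
		cfg.Name, len(cfg.Nodes), cfg.Nodes[0].IP, cfg.Nodes[0].Port)
}
```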
	I0927 18:17:40.785787   50980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:17:40.785836   50980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:17:40.821604   50980 command_runner.go:130] > 4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67
	I0927 18:17:40.821633   50980 command_runner.go:130] > cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671
	I0927 18:17:40.821640   50980 command_runner.go:130] > a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b
	I0927 18:17:40.821649   50980 command_runner.go:130] > d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64
	I0927 18:17:40.821657   50980 command_runner.go:130] > 35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7
	I0927 18:17:40.821664   50980 command_runner.go:130] > 989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3
	I0927 18:17:40.821671   50980 command_runner.go:130] > 22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e
	I0927 18:17:40.821697   50980 command_runner.go:130] > 846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a
	I0927 18:17:40.821725   50980 cri.go:89] found id: "4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67"
	I0927 18:17:40.821735   50980 cri.go:89] found id: "cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671"
	I0927 18:17:40.821741   50980 cri.go:89] found id: "a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b"
	I0927 18:17:40.821747   50980 cri.go:89] found id: "d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64"
	I0927 18:17:40.821754   50980 cri.go:89] found id: "35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7"
	I0927 18:17:40.821760   50980 cri.go:89] found id: "989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3"
	I0927 18:17:40.821767   50980 cri.go:89] found id: "22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e"
	I0927 18:17:40.821772   50980 cri.go:89] found id: "846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a"
	I0927 18:17:40.821779   50980 cri.go:89] found id: ""
	I0927 18:17:40.821830   50980 ssh_runner.go:195] Run: sudo runc list -f json
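
	(Editor's note) Before restarting the cluster, minikube enumerates the existing kube-system containers through CRI-O: the crictl invocation above returns eight container IDs, echoed back by the "found id:" lines, and `runc list -f json` is then queried for their runtime state. The sketch below reproduces that enumeration step by shelling out to the same crictl flags shown in the log; the use of sudo and the binary's presence on PATH are assumptions.

```go
// Sketch of the container enumeration shown above: ask crictl for all
// kube-system containers (quiet mode prints one ID per line), using the
// same flags the log runs. Privileges and binary location are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("listing failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id) // mirrors the cri.go "found id:" lines above
	}
}
```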
	
	
	==> CRI-O <==
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.711759893Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-b4wjc,Uid:60ecf5ff-8716-46fa-be17-3a79465fa1bb,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461102032663728,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:17:46.341231681Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-44fmt,Uid:7f5a1e22-2666-4526-a0d4-872a13ed8dd0,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1727461068285410296,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:17:46.341233030Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:56b91b01-4f2e-4e97-a817-d3c1399688a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461068205874531,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]stri
ng{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T18:17:46.341228064Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&PodSandboxMetadata{Name:kube-proxy-5mznw,Uid:95f38a43-a74c-4f6b-ac5b-cb5c172b8586,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1727461068188120806,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:17:46.341222624Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&PodSandboxMetadata{Name:kindnet-998kf,Uid:892f5465-49f4-4449-b924-802785752ddd,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461068159867944,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:17:46.341234272Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-922780,Uid:4d8cad4fffabacc42295bb83553f9862,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461062866705034,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4d8cad4fffabacc42295bb83553f9862,kubernetes.io/config.seen: 2024-09-27T18:17:42.330859489Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&PodSandboxMetadata{Name:kube-controller-mana
ger-multinode-922780,Uid:10b48abf1d5de0be5e6aed878d010028,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461062849844016,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10b48abf1d5de0be5e6aed878d010028,kubernetes.io/config.seen: 2024-09-27T18:17:42.330865475Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-922780,Uid:bb20b0304f404681b37f01465c9749a9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461062847425570,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-9
22780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: bb20b0304f404681b37f01465c9749a9,kubernetes.io/config.seen: 2024-09-27T18:17:42.330864525Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&PodSandboxMetadata{Name:etcd-multinode-922780,Uid:5393ebddc0c3fb1d9bf4c6c50054bcde,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1727461062822534570,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.
io/config.hash: 5393ebddc0c3fb1d9bf4c6c50054bcde,kubernetes.io/config.seen: 2024-09-27T18:17:42.330863197Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-b4wjc,Uid:60ecf5ff-8716-46fa-be17-3a79465fa1bb,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460741229910238,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:12:20.913595550Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-44fmt,Uid:7f5a1e22-2666-4526-a0d4-872a13ed8dd0,Namespace:kube-system,Attempt:0,
},State:SANDBOX_NOTREADY,CreatedAt:1727460690407593677,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:11:30.069946595Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:56b91b01-4f2e-4e97-a817-d3c1399688a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460690387885324,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[stri
ng]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T18:11:30.065924208Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&PodSandboxMetadata{Name:kube-proxy-5mznw,Uid:95f38a43-a74c-4f6b-ac5b-cb5c172b8586,Namespace:kube-syst
em,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460678548117296,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:11:18.237576838Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&PodSandboxMetadata{Name:kindnet-998kf,Uid:892f5465-49f4-4449-b924-802785752ddd,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460678535113626,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,k8s-app: kindnet,pod-tem
plate-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:11:18.222504412Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-922780,Uid:10b48abf1d5de0be5e6aed878d010028,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460667954605005,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 10b48abf1d5de0be5e6aed878d010028,kubernetes.io/config.seen: 2024-09-27T18:11:07.486198953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&P
odSandboxMetadata{Name:etcd-multinode-922780,Uid:5393ebddc0c3fb1d9bf4c6c50054bcde,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460667947635796,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: 5393ebddc0c3fb1d9bf4c6c50054bcde,kubernetes.io/config.seen: 2024-09-27T18:11:07.486172233Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-922780,Uid:bb20b0304f404681b37f01465c9749a9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460667943900929,Labels:map[string]string{component: kube-apiserver,io.kubernetes.co
ntainer.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: bb20b0304f404681b37f01465c9749a9,kubernetes.io/config.seen: 2024-09-27T18:11:07.486197554Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-922780,Uid:4d8cad4fffabacc42295bb83553f9862,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1727460667934430590,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,tier: control-plane,},Annotations:map[
string]string{kubernetes.io/config.hash: 4d8cad4fffabacc42295bb83553f9862,kubernetes.io/config.seen: 2024-09-27T18:11:07.486199846Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4f8c4db0-3463-4a30-b6a3-7c39bf454b51 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.712719219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2daa62e7-1e7b-4a25-b117-c2625d4b5134 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.712797049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2daa62e7-1e7b-4a25-b117-c2625d4b5134 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.713902493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2daa62e7-1e7b-4a25-b117-c2625d4b5134 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.726427364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=060ad2ed-0c37-442f-ae81-5d05598bd998 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.726494905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=060ad2ed-0c37-442f-ae81-5d05598bd998 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.728873920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36672978-7b2b-420b-9758-adec2e8be3d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.729421436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461166729397677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36672978-7b2b-420b-9758-adec2e8be3d0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.729989807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ef110e0-ef9d-46e9-9e94-82a7c7923a7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.730060531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ef110e0-ef9d-46e9-9e94-82a7c7923a7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.730488204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ef110e0-ef9d-46e9-9e94-82a7c7923a7a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.771630093Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d3dad05-96e1-4774-9eb3-70436ed60436 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.771726242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d3dad05-96e1-4774-9eb3-70436ed60436 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.772875747Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9346126-f7f1-431a-bc0e-c1d599d64b99 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.773460248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461166773435896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9346126-f7f1-431a-bc0e-c1d599d64b99 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.774006411Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5034b5e-572c-4d20-acf1-6bddfaf6ad8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.774075419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5034b5e-572c-4d20-acf1-6bddfaf6ad8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.774456282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5034b5e-572c-4d20-acf1-6bddfaf6ad8c name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.815512186Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f2e4e4a-f55c-4a28-b54f-b064cdbeaa86 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.815610874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f2e4e4a-f55c-4a28-b54f-b064cdbeaa86 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.816546101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2f83da9-ff29-4a65-a204-e9c1012ab5e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.817000668Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461166816978303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2f83da9-ff29-4a65-a204-e9c1012ab5e8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.817581683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8537af2-971e-4320-a1be-f32b495d05ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.817650916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8537af2-971e-4320-a1be-f32b495d05ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:19:26 multinode-922780 crio[2687]: time="2024-09-27 18:19:26.818034696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8537af2-971e-4320-a1be-f32b495d05ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	165e7a35fbfff       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   fdc267c7df365       busybox-7dff88458-b4wjc
	3184ec5c3cd3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   410d0c9cfe4e3       coredns-7c65d6cfc9-44fmt
	e242d9cd69ad3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   aecfe5cec0880       kube-proxy-5mznw
	23d491ec919d6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   4b6fd3691433a       kindnet-998kf
	7a882adf5cf23       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b647f3457b7d5       storage-provisioner
	afff55be12fce       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   9add222ee9e65       kube-scheduler-multinode-922780
	ada30153748b8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   df7d0191d0e46       kube-apiserver-multinode-922780
	f7ef79385aeb2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   aa64fa5966ff0       kube-controller-manager-multinode-922780
	c77bfcd7006ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   4d03d86bdc3d6       etcd-multinode-922780
	51ed58a33b82c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   cf9b758a85a4f       busybox-7dff88458-b4wjc
	4b4e8ab3f4b6e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      7 minutes ago        Exited              coredns                   0                   61a617a9bfbdb       coredns-7c65d6cfc9-44fmt
	cca4a7c828a0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   f5ea05780e1d0       storage-provisioner
	a965ab4d2b3e0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   c6cb30da32a92       kindnet-998kf
	d085955bc4917       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   fbe5d4d74bd77       kube-proxy-5mznw
	35e86781cf3ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   81ace1646a07b       etcd-multinode-922780
	989cab852d99e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   ce23b4635c818       kube-scheduler-multinode-922780
	22e0a85d544be       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   1701dd96be1c7       kube-apiserver-multinode-922780
	846a04b06f43d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   268366d188b92       kube-controller-manager-multinode-922780
	
	
	==> coredns [3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46871 - 3832 "HINFO IN 3286544022602867680.7406210148567013429. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0101406s
	
	
	==> coredns [4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67] <==
	[INFO] 10.244.1.2:51647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002012227s
	[INFO] 10.244.1.2:39368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114744s
	[INFO] 10.244.1.2:50155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000266853s
	[INFO] 10.244.1.2:33796 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001847074s
	[INFO] 10.244.1.2:45834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082588s
	[INFO] 10.244.1.2:43515 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001466s
	[INFO] 10.244.1.2:37248 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072721s
	[INFO] 10.244.0.3:40017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119609s
	[INFO] 10.244.0.3:50048 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067618s
	[INFO] 10.244.0.3:56012 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051131s
	[INFO] 10.244.0.3:40755 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097414s
	[INFO] 10.244.1.2:51235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018039s
	[INFO] 10.244.1.2:48451 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130222s
	[INFO] 10.244.1.2:33568 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076099s
	[INFO] 10.244.1.2:48476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088972s
	[INFO] 10.244.0.3:41501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085151s
	[INFO] 10.244.0.3:45234 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163836s
	[INFO] 10.244.0.3:39921 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101893s
	[INFO] 10.244.0.3:44887 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068439s
	[INFO] 10.244.1.2:41027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260438s
	[INFO] 10.244.1.2:59660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077846s
	[INFO] 10.244.1.2:56509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076323s
	[INFO] 10.244.1.2:57417 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066676s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
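
The bracketed hash in each "==> coredns [...] <==" header is the container ID, and this second block belongs to the coredns container that has since exited (see the container table above). A hedged sketch for pulling these logs directly, assuming crictl resolves the ID prefix:

  minikube ssh -p multinode-922780 -- sudo crictl logs 4b4e8ab3f4b6e
  kubectl -n kube-system logs coredns-7c65d6cfc9-44fmt   # current container in the same pod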
	
	
	==> describe nodes <==
	Name:               multinode-922780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=multinode-922780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T18_11_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:11:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:19:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    multinode-922780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6d57dad0044f45b917aa623008f382
	  System UUID:                5f6d57da-d004-4f45-b917-aa623008f382
	  Boot ID:                    446d1f84-bf62-41a7-94ce-14673a478468
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b4wjc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 coredns-7c65d6cfc9-44fmt                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m9s
	  kube-system                 etcd-multinode-922780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m14s
	  kube-system                 kindnet-998kf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m9s
	  kube-system                 kube-apiserver-multinode-922780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-multinode-922780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-5mznw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kube-system                 kube-scheduler-multinode-922780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m8s                 kube-proxy       
	  Normal  Starting                 97s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m14s                kubelet          Node multinode-922780 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m14s                kubelet          Node multinode-922780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s                kubelet          Node multinode-922780 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m14s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m10s                node-controller  Node multinode-922780 event: Registered Node multinode-922780 in Controller
	  Normal  NodeReady                7m57s                kubelet          Node multinode-922780 status is now: NodeReady
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node multinode-922780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node multinode-922780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x7 over 105s)  kubelet          Node multinode-922780 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                  node-controller  Node multinode-922780 event: Registered Node multinode-922780 in Controller
	
	
	Name:               multinode-922780-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=multinode-922780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T18_18_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:18:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:19:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:18:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:18:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:18:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:18:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    multinode-922780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f7a4de44d4d4b1dbba10366435f44d4
	  System UUID:                7f7a4de4-4d4d-4b1d-bba1-0366435f44d4
	  Boot ID:                    10978c1b-be8e-468f-9ed6-668d13bef83b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-222pq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-45qxg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-proxy-bqkzm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m23s                  kube-proxy  
	  Normal  Starting                 55s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet     Node multinode-922780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet     Node multinode-922780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet     Node multinode-922780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m9s                   kubelet     Node multinode-922780-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-922780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-922780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-922780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-922780-m02 status is now: NodeReady
	
	
	Name:               multinode-922780-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=multinode-922780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T18_19_05_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:19:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:19:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:19:24 +0000   Fri, 27 Sep 2024 18:19:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:19:24 +0000   Fri, 27 Sep 2024 18:19:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:19:24 +0000   Fri, 27 Sep 2024 18:19:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:19:24 +0000   Fri, 27 Sep 2024 18:19:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.130
	  Hostname:    multinode-922780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a93c0aa935b4a64b88cd20a4d858e30
	  System UUID:                9a93c0aa-935b-4a64-b88c-d20a4d858e30
	  Boot ID:                    0a37310c-b326-4eda-b315-cd535c1fc67e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-8jsf9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m34s
	  kube-system                 kube-proxy-p98m2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m39s                  kube-proxy       
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 17s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m34s (x2 over 6m34s)  kubelet          Node multinode-922780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x2 over 6m34s)  kubelet          Node multinode-922780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x2 over 6m34s)  kubelet          Node multinode-922780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s                  kubelet          Node multinode-922780-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m45s (x2 over 5m45s)  kubelet          Node multinode-922780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m45s (x2 over 5m45s)  kubelet          Node multinode-922780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m45s (x2 over 5m45s)  kubelet          Node multinode-922780-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m45s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m25s                  kubelet          Node multinode-922780-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-922780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-922780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-922780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                    node-controller  Node multinode-922780-m03 event: Registered Node multinode-922780-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-922780-m03 status is now: NodeReady
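
The three node blocks above are equivalent to kubectl describe output for each node; the request/limit percentages are computed against the node's allocatable capacity (for example, 850m CPU requested out of 2 allocatable CPUs ≈ 42%). To regenerate just this section against the same cluster:

  kubectl describe nodes
  kubectl describe node multinode-922780-m03   # a single node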
	
	
	==> dmesg <==
	[  +0.055261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058393] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.195905] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.126465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.265573] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[Sep27 18:11] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.746348] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.063440] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994833] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.074919] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.617860] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.501929] kauditd_printk_skb: 46 callbacks suppressed
	[ +12.243867] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 18:12] kauditd_printk_skb: 14 callbacks suppressed
	[Sep27 18:17] systemd-fstab-generator[2612]: Ignoring "noauto" option for root device
	[  +0.144123] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +0.169577] systemd-fstab-generator[2638]: Ignoring "noauto" option for root device
	[  +0.143091] systemd-fstab-generator[2651]: Ignoring "noauto" option for root device
	[  +0.282156] systemd-fstab-generator[2679]: Ignoring "noauto" option for root device
	[  +0.692620] systemd-fstab-generator[2770]: Ignoring "noauto" option for root device
	[  +1.891157] systemd-fstab-generator[2890]: Ignoring "noauto" option for root device
	[  +6.169390] kauditd_printk_skb: 184 callbacks suppressed
	[  +9.040677] kauditd_printk_skb: 34 callbacks suppressed
	[Sep27 18:18] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[ +19.430119] kauditd_printk_skb: 12 callbacks suppressed
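
These lines are the tail of the guest kernel's ring buffer, grouped by timestamp. A hedged sketch of an equivalent command on the node (the human-readable flag is an assumption about how this section was captured):

  minikube ssh -p multinode-922780 -- sudo dmesg -H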
	
	
	==> etcd [35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7] <==
	{"level":"warn","ts":"2024-09-27T18:12:04.467419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.898885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922780-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-27T18:12:04.467591Z","caller":"traceutil/trace.go:171","msg":"trace[1105620344] range","detail":"{range_begin:/registry/minions/multinode-922780-m02; range_end:; response_count:1; response_revision:476; }","duration":"129.082638ms","start":"2024-09-27T18:12:04.338492Z","end":"2024-09-27T18:12:04.467575Z","steps":["trace[1105620344] 'range keys from in-memory index tree'  (duration: 128.811504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:12:53.805128Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.892673ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11349231815928092018 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-922780-m03.17f92c6bcb6950b8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-922780-m03.17f92c6bcb6950b8\" value_size:646 lease:2125859779073315907 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-27T18:12:53.805458Z","caller":"traceutil/trace.go:171","msg":"trace[2013861784] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"223.911209ms","start":"2024-09-27T18:12:53.581523Z","end":"2024-09-27T18:12:53.805434Z","steps":["trace[2013861784] 'read index received'  (duration: 85.007791ms)","trace[2013861784] 'applied index is now lower than readState.Index'  (duration: 138.902236ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T18:12:53.805522Z","caller":"traceutil/trace.go:171","msg":"trace[105043843] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"230.955725ms","start":"2024-09-27T18:12:53.574544Z","end":"2024-09-27T18:12:53.805500Z","steps":["trace[105043843] 'process raft request'  (duration: 91.972833ms)","trace[105043843] 'compare'  (duration: 137.765138ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T18:12:53.805665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.135473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-922780-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T18:12:53.805708Z","caller":"traceutil/trace.go:171","msg":"trace[616286611] range","detail":"{range_begin:/registry/csinodes/multinode-922780-m03; range_end:; response_count:0; response_revision:573; }","duration":"224.183938ms","start":"2024-09-27T18:12:53.581517Z","end":"2024-09-27T18:12:53.805701Z","steps":["trace[616286611] 'agreement among raft nodes before linearized reading'  (duration: 224.060057ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:12:53.805794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.212518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-922780-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T18:12:53.805833Z","caller":"traceutil/trace.go:171","msg":"trace[2136683949] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-922780-m03; range_end:; response_count:0; response_revision:573; }","duration":"224.25523ms","start":"2024-09-27T18:12:53.581571Z","end":"2024-09-27T18:12:53.805827Z","steps":["trace[2136683949] 'agreement among raft nodes before linearized reading'  (duration: 224.196297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:12:53.805941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.518237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922780-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T18:12:53.805987Z","caller":"traceutil/trace.go:171","msg":"trace[1082279073] range","detail":"{range_begin:/registry/minions/multinode-922780-m03; range_end:; response_count:0; response_revision:573; }","duration":"112.565858ms","start":"2024-09-27T18:12:53.693414Z","end":"2024-09-27T18:12:53.805980Z","steps":["trace[1082279073] 'agreement among raft nodes before linearized reading'  (duration: 112.504532ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T18:13:01.394597Z","caller":"traceutil/trace.go:171","msg":"trace[439980453] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"198.520349ms","start":"2024-09-27T18:13:01.196059Z","end":"2024-09-27T18:13:01.394579Z","steps":["trace[439980453] 'read index received'  (duration: 198.370344ms)","trace[439980453] 'applied index is now lower than readState.Index'  (duration: 149.474µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T18:13:01.395021Z","caller":"traceutil/trace.go:171","msg":"trace[1482407675] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"227.677722ms","start":"2024-09-27T18:13:01.167329Z","end":"2024-09-27T18:13:01.395006Z","steps":["trace[1482407675] 'process raft request'  (duration: 227.149255ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:13:01.395197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.087998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922780-m03\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-27T18:13:01.395581Z","caller":"traceutil/trace.go:171","msg":"trace[475392158] range","detail":"{range_begin:/registry/minions/multinode-922780-m03; range_end:; response_count:1; response_revision:614; }","duration":"199.532631ms","start":"2024-09-27T18:13:01.196037Z","end":"2024-09-27T18:13:01.395569Z","steps":["trace[475392158] 'agreement among raft nodes before linearized reading'  (duration: 199.017369ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T18:16:07.511342Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T18:16:07.511500Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-922780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	{"level":"warn","ts":"2024-09-27T18:16:07.511657Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T18:16:07.511780Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T18:16:07.588757Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T18:16:07.588823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T18:16:07.588891Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6f26d2d338759d80","current-leader-member-id":"6f26d2d338759d80"}
	{"level":"info","ts":"2024-09-27T18:16:07.591874Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:16:07.592112Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:16:07.592192Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-922780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	
	==> etcd [c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984] <==
	{"level":"info","ts":"2024-09-27T18:17:43.424354Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:17:43.424436Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","added-peer-id":"6f26d2d338759d80","added-peer-peer-urls":["https://192.168.39.6:2380"]}
	{"level":"info","ts":"2024-09-27T18:17:43.424603Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:17:43.424923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:17:43.470437Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T18:17:43.472466Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6f26d2d338759d80","initial-advertise-peer-urls":["https://192.168.39.6:2380"],"listen-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:17:43.472512Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:17:43.479167Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:17:43.479276Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:17:45.077747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T18:17:45.077832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T18:17:45.077876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2024-09-27T18:17:45.077891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.077897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.077908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.077916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.085215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:17:45.085526Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:17:45.085229Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:multinode-922780 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:17:45.086011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T18:17:45.086049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T18:17:45.086625Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:17:45.086633Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:17:45.087437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2024-09-27T18:17:45.087988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
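
The restarted etcd member wins a single-voter election at term 3 and resumes serving on 192.168.39.6:2379 and 127.0.0.1:2379 with the certificates listed above. A speculative health probe against it, assuming etcdctl is present in the etcd pod and that the server certificate is accepted for client authentication:

  kubectl -n kube-system exec etcd-multinode-922780 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint health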
	
	
	==> kernel <==
	 18:19:27 up 8 min,  0 users,  load average: 0.36, 0.35, 0.19
	Linux multinode-922780 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
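
A hedged sketch of how this one-line system summary could be reproduced on the node (passing a compound command through minikube ssh is an assumption):

  minikube ssh -p multinode-922780 -- "uptime; uname -a; grep PRETTY_NAME /etc/os-release"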
	
	
	==> kindnet [23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f] <==
	I0927 18:18:39.528870       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:18:49.529049       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:18:49.529223       1 main.go:299] handling current node
	I0927 18:18:49.529255       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:18:49.529274       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:18:49.529460       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:18:49.529489       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:18:59.528615       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:18:59.528884       1 main.go:299] handling current node
	I0927 18:18:59.528942       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:18:59.528972       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:18:59.529258       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:18:59.529307       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:19:09.528585       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:19:09.528659       1 main.go:299] handling current node
	I0927 18:19:09.528676       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:19:09.528684       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:19:09.528800       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:19:09.528806       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.2.0/24] 
	I0927 18:19:19.528707       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:19:19.528769       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:19:19.528954       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:19:19.528963       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.2.0/24] 
	I0927 18:19:19.529055       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:19:19.529079       1 main.go:299] handling current node
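
Note that this kindnet instance initially reports multinode-922780-m03 with CIDR 10.244.4.0/24 (carried over from the node's earlier registration) and switches to 10.244.2.0/24 at 18:19:09, matching the PodCIDR the controller-manager reassigns when m03 rejoins at 18:19:05 (see the kube-controller-manager log further down). kindnet turns each peer CIDR into a host route via that node's IP; a hedged way to inspect the result on the control-plane VM:

  minikube ssh -p multinode-922780 -- "ip route | grep 10.244"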
	
	
	==> kindnet [a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b] <==
	I0927 18:15:19.920905       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:29.920029       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:29.920116       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:29.920341       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:29.920363       1 main.go:299] handling current node
	I0927 18:15:29.920380       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:29.920385       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:39.926304       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:39.926369       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:39.926506       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:39.926526       1 main.go:299] handling current node
	I0927 18:15:39.926540       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:39.926545       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:49.920236       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:49.920289       1 main.go:299] handling current node
	I0927 18:15:49.920310       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:49.920318       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:49.920495       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:49.920521       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:59.926703       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:59.926851       1 main.go:299] handling current node
	I0927 18:15:59.926883       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:59.926906       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:59.927065       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:59.927091       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e] <==
	I0927 18:11:12.055316       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:11:12.105557       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:11:12.188739       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0927 18:11:12.196038       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.6]
	I0927 18:11:12.197023       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 18:11:12.204336       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 18:11:12.454840       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:11:13.372926       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:11:13.404658       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 18:11:13.416501       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:11:17.906950       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 18:11:18.157376       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 18:12:26.055321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51268: use of closed network connection
	E0927 18:12:26.224033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51296: use of closed network connection
	E0927 18:12:26.414610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51302: use of closed network connection
	E0927 18:12:26.588865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51316: use of closed network connection
	E0927 18:12:26.749487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50030: use of closed network connection
	E0927 18:12:26.911972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50040: use of closed network connection
	E0927 18:12:27.185623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50054: use of closed network connection
	E0927 18:12:27.344610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50068: use of closed network connection
	E0927 18:12:27.510967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50082: use of closed network connection
	E0927 18:12:27.671301       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50098: use of closed network connection
	I0927 18:16:07.510255       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0927 18:16:07.533658       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 18:16:07.540285       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25] <==
	I0927 18:17:46.357925       1 aggregator.go:171] initial CRD sync complete...
	I0927 18:17:46.358085       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 18:17:46.358166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 18:17:46.400267       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 18:17:46.408011       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 18:17:46.408226       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 18:17:46.408257       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 18:17:46.408366       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 18:17:46.408569       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 18:17:46.408674       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 18:17:46.408881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0927 18:17:46.420010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 18:17:46.433521       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 18:17:46.444892       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 18:17:46.445008       1 policy_source.go:224] refreshing policies
	I0927 18:17:46.448417       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 18:17:46.459775       1 cache.go:39] Caches are synced for autoregister controller
	I0927 18:17:47.303702       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 18:17:48.607055       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:17:48.970009       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:17:49.001729       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:17:49.145870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:17:49.152639       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:17:49.875521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 18:17:50.125740       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a] <==
	I0927 18:13:41.739178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:13:41.739327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:42.898781       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-922780-m03\" does not exist"
	I0927 18:13:42.899677       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:13:42.928253       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922780-m03" podCIDRs=["10.244.4.0/24"]
	I0927 18:13:42.928340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:42.928399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:42.928447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:43.262243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:43.590030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:47.558589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:53.039873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:02.043413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:02.043973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:14:02.054663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:02.475205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:42.492022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:14:42.492535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m03"
	I0927 18:14:42.517465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:14:42.554898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.578088ms"
	I0927 18:14:42.555293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.073µs"
	I0927 18:14:47.554296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:47.572811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:47.644610       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:14:57.724215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	
	
	==> kube-controller-manager [f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f] <==
	I0927 18:18:46.677983       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:18:46.687412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.343µs"
	I0927 18:18:46.699818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.453µs"
	I0927 18:18:49.869069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:18:50.704184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.270016ms"
	I0927 18:18:50.704311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="83.288µs"
	I0927 18:18:58.135424       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:19:04.414653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:04.437199       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:04.674533       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:19:04.674634       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.646771       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-922780-m03\" does not exist"
	I0927 18:19:05.646786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:19:05.661647       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922780-m03" podCIDRs=["10.244.2.0/24"]
	I0927 18:19:05.664355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.664463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.667427       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.682303       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:06.001564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:09.978779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:16.019654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:24.006658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:24.007559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:19:24.020124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:24.888010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
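
This log confirms the PodCIDR churn seen in the kindnet logs: when multinode-922780-m03 re-registers at 18:19:05, the node-ipam-controller assigns it 10.244.2.0/24. One way to check the assignments the controller has recorded:

  kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR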
	
	
	==> kube-proxy [d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:11:18.953574       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:11:18.965058       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0927 18:11:18.965430       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:11:19.000562       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:11:19.000592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:11:19.000615       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:11:19.002792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:11:19.003067       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:11:19.003115       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:11:19.005028       1 config.go:199] "Starting service config controller"
	I0927 18:11:19.005051       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:11:19.005070       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:11:19.005074       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:11:19.005533       1 config.go:328] "Starting node config controller"
	I0927 18:11:19.005561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:11:19.105241       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 18:11:19.105324       1 shared_informer.go:320] Caches are synced for service config
	I0927 18:11:19.105595       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:17:49.007799       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:17:49.023818       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0927 18:17:49.023977       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:17:49.075004       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:17:49.075219       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:17:49.075357       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:17:49.079400       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:17:49.080269       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:17:49.080388       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:17:49.087410       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:17:49.087477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:17:49.088037       1 config.go:328] "Starting node config controller"
	I0927 18:17:49.088058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:17:49.089061       1 config.go:199] "Starting service config controller"
	I0927 18:17:49.089089       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:17:49.187575       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 18:17:49.188782       1 shared_informer.go:320] Caches are synced for node config
	I0927 18:17:49.189972       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3] <==
	E0927 18:11:10.494175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 18:11:10.494271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:10.494343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 18:11:10.494431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:10.494504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.317736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:11.317903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.374986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 18:11:11.375052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.574742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 18:11:11.574796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.594324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:11.594430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.620913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 18:11:11.620965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.723662       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 18:11:11.724004       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 18:11:11.791378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 18:11:11.791426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 18:11:14.282700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 18:16:07.521840       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe] <==
	I0927 18:17:44.089266       1 serving.go:386] Generated self-signed cert in-memory
	W0927 18:17:46.370851       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:17:46.370990       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:17:46.371021       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:17:46.371091       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:17:46.404363       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 18:17:46.406221       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:17:46.411052       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 18:17:46.412299       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 18:17:46.412417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:17:46.412462       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 18:17:46.513233       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 18:17:52 multinode-922780 kubelet[2897]: E0927 18:17:52.413787    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461072413395491,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:17:57 multinode-922780 kubelet[2897]: I0927 18:17:57.289654    2897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 27 18:18:02 multinode-922780 kubelet[2897]: E0927 18:18:02.415927    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461082415477630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:02 multinode-922780 kubelet[2897]: E0927 18:18:02.416378    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461082415477630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:12 multinode-922780 kubelet[2897]: E0927 18:18:12.418341    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461092417789666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:12 multinode-922780 kubelet[2897]: E0927 18:18:12.418380    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461092417789666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:22 multinode-922780 kubelet[2897]: E0927 18:18:22.420404    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461102419792233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:22 multinode-922780 kubelet[2897]: E0927 18:18:22.420729    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461102419792233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:32 multinode-922780 kubelet[2897]: E0927 18:18:32.422865    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461112422530094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:32 multinode-922780 kubelet[2897]: E0927 18:18:32.428202    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461112422530094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:42 multinode-922780 kubelet[2897]: E0927 18:18:42.409832    2897 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 18:18:42 multinode-922780 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 18:18:42 multinode-922780 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 18:18:42 multinode-922780 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 18:18:42 multinode-922780 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 18:18:42 multinode-922780 kubelet[2897]: E0927 18:18:42.430332    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461122429951587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:42 multinode-922780 kubelet[2897]: E0927 18:18:42.430370    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461122429951587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:52 multinode-922780 kubelet[2897]: E0927 18:18:52.433203    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461132432449161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:18:52 multinode-922780 kubelet[2897]: E0927 18:18:52.433270    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461132432449161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:19:02 multinode-922780 kubelet[2897]: E0927 18:19:02.436196    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461142435696018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:19:02 multinode-922780 kubelet[2897]: E0927 18:19:02.436222    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461142435696018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:19:12 multinode-922780 kubelet[2897]: E0927 18:19:12.438801    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461152438477285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:19:12 multinode-922780 kubelet[2897]: E0927 18:19:12.439295    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461152438477285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:19:22 multinode-922780 kubelet[2897]: E0927 18:19:22.442366    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461162441898184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:19:22 multinode-922780 kubelet[2897]: E0927 18:19:22.442409    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461162441898184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 18:19:26.415198   52495 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19712-11184/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-922780 -n multinode-922780
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-922780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (323.47s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (144.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 stop
E0927 18:20:17.001352   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922780 stop: exit status 82 (2m0.472935702s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-922780-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-922780 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 status: (18.762591156s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr: (3.360136072s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-922780 -n multinode-922780
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 logs -n 25: (1.389844258s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780:/home/docker/cp-test_multinode-922780-m02_multinode-922780.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780 sudo cat                                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m02_multinode-922780.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03:/home/docker/cp-test_multinode-922780-m02_multinode-922780-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780-m03 sudo cat                                   | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m02_multinode-922780-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp testdata/cp-test.txt                                                | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4096433933/001/cp-test_multinode-922780-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780:/home/docker/cp-test_multinode-922780-m03_multinode-922780.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780 sudo cat                                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m03_multinode-922780.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02:/home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780-m02 sudo cat                                   | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-922780 node stop m03                                                          | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	| node    | multinode-922780 node start                                                             | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:14 UTC |                     |
	| stop    | -p multinode-922780                                                                     | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:14 UTC |                     |
	| start   | -p multinode-922780                                                                     | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:16 UTC | 27 Sep 24 18:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC |                     |
	| node    | multinode-922780 node delete                                                            | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC | 27 Sep 24 18:19 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-922780 stop                                                                   | multinode-922780 | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:16:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:16:06.633295   50980 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:16:06.633430   50980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:16:06.633439   50980 out.go:358] Setting ErrFile to fd 2...
	I0927 18:16:06.633444   50980 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:16:06.633644   50980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:16:06.634208   50980 out.go:352] Setting JSON to false
	I0927 18:16:06.635199   50980 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7112,"bootTime":1727453855,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:16:06.635289   50980 start.go:139] virtualization: kvm guest
	I0927 18:16:06.638753   50980 out.go:177] * [multinode-922780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:16:06.640279   50980 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:16:06.640275   50980 notify.go:220] Checking for updates...
	I0927 18:16:06.643252   50980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:16:06.644829   50980 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:16:06.647425   50980 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:16:06.648815   50980 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:16:06.650269   50980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:16:06.652207   50980 config.go:182] Loaded profile config "multinode-922780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:16:06.652397   50980 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:16:06.653010   50980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:16:06.653088   50980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:16:06.668512   50980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I0927 18:16:06.669079   50980 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:16:06.669679   50980 main.go:141] libmachine: Using API Version  1
	I0927 18:16:06.669699   50980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:16:06.670032   50980 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:16:06.670276   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:16:06.707706   50980 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:16:06.709291   50980 start.go:297] selected driver: kvm2
	I0927 18:16:06.709306   50980 start.go:901] validating driver "kvm2" against &{Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:16:06.709432   50980 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:16:06.709738   50980 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:16:06.709826   50980 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:16:06.724907   50980 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:16:06.725654   50980 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:16:06.725698   50980 cni.go:84] Creating CNI manager for ""
	I0927 18:16:06.725772   50980 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 18:16:06.725850   50980 start.go:340] cluster config:
	{Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:16:06.726016   50980 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:16:06.728226   50980 out.go:177] * Starting "multinode-922780" primary control-plane node in "multinode-922780" cluster
	I0927 18:16:06.729675   50980 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:16:06.729719   50980 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:16:06.729734   50980 cache.go:56] Caching tarball of preloaded images
	I0927 18:16:06.729853   50980 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:16:06.729867   50980 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:16:06.729972   50980 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/config.json ...
	I0927 18:16:06.730175   50980 start.go:360] acquireMachinesLock for multinode-922780: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:16:06.730219   50980 start.go:364] duration metric: took 26.275µs to acquireMachinesLock for "multinode-922780"
	I0927 18:16:06.730237   50980 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:16:06.730245   50980 fix.go:54] fixHost starting: 
	I0927 18:16:06.730500   50980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:16:06.730535   50980 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:16:06.744874   50980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0927 18:16:06.745361   50980 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:16:06.745869   50980 main.go:141] libmachine: Using API Version  1
	I0927 18:16:06.745896   50980 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:16:06.746316   50980 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:16:06.746517   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:16:06.746688   50980 main.go:141] libmachine: (multinode-922780) Calling .GetState
	I0927 18:16:06.748272   50980 fix.go:112] recreateIfNeeded on multinode-922780: state=Running err=<nil>
	W0927 18:16:06.748293   50980 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:16:06.750391   50980 out.go:177] * Updating the running kvm2 "multinode-922780" VM ...
	I0927 18:16:06.751826   50980 machine.go:93] provisionDockerMachine start ...
	I0927 18:16:06.751851   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:16:06.752060   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:06.754951   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.755498   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:06.755523   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.755723   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:06.755928   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.756072   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.756191   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:06.756375   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:06.756592   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:06.756604   50980 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:16:06.859464   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922780
	
	I0927 18:16:06.859522   50980 main.go:141] libmachine: (multinode-922780) Calling .GetMachineName
	I0927 18:16:06.859780   50980 buildroot.go:166] provisioning hostname "multinode-922780"
	I0927 18:16:06.859804   50980 main.go:141] libmachine: (multinode-922780) Calling .GetMachineName
	I0927 18:16:06.859985   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:06.862471   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.862913   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:06.862938   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.863108   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:06.863320   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.863462   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.863616   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:06.863788   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:06.864009   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:06.864024   50980 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-922780 && echo "multinode-922780" | sudo tee /etc/hostname
	I0927 18:16:06.979859   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-922780
	
	I0927 18:16:06.979896   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:06.983501   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.983995   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:06.984033   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:06.984333   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:06.984574   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.984785   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:06.984940   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:06.985113   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:06.985342   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:06.985366   50980 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-922780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-922780/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-922780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:16:07.087973   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:16:07.088001   50980 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:16:07.088038   50980 buildroot.go:174] setting up certificates
	I0927 18:16:07.088048   50980 provision.go:84] configureAuth start
	I0927 18:16:07.088056   50980 main.go:141] libmachine: (multinode-922780) Calling .GetMachineName
	I0927 18:16:07.088379   50980 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:16:07.091802   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.092226   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.092252   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.092559   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:07.095273   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.095692   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.095729   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.095883   50980 provision.go:143] copyHostCerts
	I0927 18:16:07.095910   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:16:07.095954   50980 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:16:07.095967   50980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:16:07.096070   50980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:16:07.096182   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:16:07.096201   50980 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:16:07.096208   50980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:16:07.096237   50980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:16:07.096325   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:16:07.096342   50980 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:16:07.096346   50980 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:16:07.096369   50980 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:16:07.096430   50980 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.multinode-922780 san=[127.0.0.1 192.168.39.6 localhost minikube multinode-922780]
	I0927 18:16:07.226198   50980 provision.go:177] copyRemoteCerts
	I0927 18:16:07.226257   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:16:07.226279   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:07.229395   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.229777   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.229799   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.229979   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:07.230160   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:07.230313   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:07.230472   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:16:07.311548   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0927 18:16:07.311636   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:16:07.336113   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0927 18:16:07.336178   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0927 18:16:07.360468   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0927 18:16:07.360547   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 18:16:07.392888   50980 provision.go:87] duration metric: took 304.829582ms to configureAuth
	I0927 18:16:07.392915   50980 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:16:07.393149   50980 config.go:182] Loaded profile config "multinode-922780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:16:07.393240   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:16:07.396221   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.396661   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:16:07.396692   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:16:07.396918   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:16:07.397118   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:07.397275   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:16:07.397402   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:16:07.397544   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:16:07.397756   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:16:07.397779   50980 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:17:38.184983   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 18:17:38.185011   50980 machine.go:96] duration metric: took 1m31.43316818s to provisionDockerMachine
	I0927 18:17:38.185059   50980 start.go:293] postStartSetup for "multinode-922780" (driver="kvm2")
	I0927 18:17:38.185075   50980 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:17:38.185101   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.185497   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:17:38.185536   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.189013   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.189709   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.189731   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.190012   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.190216   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.190399   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.190556   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:17:38.269917   50980 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:17:38.273989   50980 command_runner.go:130] > NAME=Buildroot
	I0927 18:17:38.274007   50980 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0927 18:17:38.274011   50980 command_runner.go:130] > ID=buildroot
	I0927 18:17:38.274016   50980 command_runner.go:130] > VERSION_ID=2023.02.9
	I0927 18:17:38.274023   50980 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0927 18:17:38.274058   50980 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:17:38.274071   50980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:17:38.274126   50980 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:17:38.274199   50980 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:17:38.274205   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /etc/ssl/certs/183682.pem
	I0927 18:17:38.274282   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:17:38.283325   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:17:38.307463   50980 start.go:296] duration metric: took 122.386435ms for postStartSetup
	I0927 18:17:38.307515   50980 fix.go:56] duration metric: took 1m31.577269193s for fixHost
	I0927 18:17:38.307541   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.311388   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.311839   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.311870   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.312083   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.312268   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.312486   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.312689   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.312916   50980 main.go:141] libmachine: Using SSH client type: native
	I0927 18:17:38.313071   50980 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I0927 18:17:38.313081   50980 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:17:38.411421   50980 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727461058.385615551
	
	I0927 18:17:38.411445   50980 fix.go:216] guest clock: 1727461058.385615551
	I0927 18:17:38.411472   50980 fix.go:229] Guest: 2024-09-27 18:17:38.385615551 +0000 UTC Remote: 2024-09-27 18:17:38.30752402 +0000 UTC m=+91.709895056 (delta=78.091531ms)
	I0927 18:17:38.411505   50980 fix.go:200] guest clock delta is within tolerance: 78.091531ms
	I0927 18:17:38.411515   50980 start.go:83] releasing machines lock for "multinode-922780", held for 1m31.681284736s
	I0927 18:17:38.411542   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.411804   50980 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:17:38.414758   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.415194   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.415224   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.415409   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.416035   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.416265   50980 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:17:38.416334   50980 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:17:38.416382   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.416485   50980 ssh_runner.go:195] Run: cat /version.json
	I0927 18:17:38.416510   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:17:38.419410   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.419771   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.419801   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.419821   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.419921   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.420041   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:38.420063   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:38.420067   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.420246   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.420261   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:17:38.420435   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:17:38.420498   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:17:38.420612   50980 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:17:38.420765   50980 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:17:38.495158   50980 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I0927 18:17:38.495508   50980 ssh_runner.go:195] Run: systemctl --version
	I0927 18:17:38.531856   50980 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0927 18:17:38.531919   50980 command_runner.go:130] > systemd 252 (252)
	I0927 18:17:38.531937   50980 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0927 18:17:38.531991   50980 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:17:38.694324   50980 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 18:17:38.700130   50980 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0927 18:17:38.700183   50980 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:17:38.700256   50980 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:17:38.709411   50980 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 18:17:38.709438   50980 start.go:495] detecting cgroup driver to use...
	I0927 18:17:38.709521   50980 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:17:38.725808   50980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:17:38.740387   50980 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:17:38.740577   50980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:17:38.756266   50980 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:17:38.770329   50980 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:17:38.914137   50980 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:17:39.055612   50980 docker.go:233] disabling docker service ...
	I0927 18:17:39.055706   50980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:17:39.072235   50980 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:17:39.086069   50980 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:17:39.224632   50980 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:17:39.370567   50980 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:17:39.385802   50980 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:17:39.404644   50980 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0927 18:17:39.405093   50980 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 18:17:39.405147   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.415352   50980 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:17:39.415432   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.425709   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.435787   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.447369   50980 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:17:39.459302   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.470166   50980 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.481321   50980 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:17:39.491285   50980 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:17:39.500279   50980 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0927 18:17:39.500371   50980 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:17:39.509374   50980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:17:39.649257   50980 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 18:17:39.853025   50980 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:17:39.853108   50980 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:17:39.858522   50980 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0927 18:17:39.858546   50980 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0927 18:17:39.858555   50980 command_runner.go:130] > Device: 0,22	Inode: 1310        Links: 1
	I0927 18:17:39.858563   50980 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 18:17:39.858568   50980 command_runner.go:130] > Access: 2024-09-27 18:17:39.712296829 +0000
	I0927 18:17:39.858575   50980 command_runner.go:130] > Modify: 2024-09-27 18:17:39.712296829 +0000
	I0927 18:17:39.858580   50980 command_runner.go:130] > Change: 2024-09-27 18:17:39.712296829 +0000
	I0927 18:17:39.858583   50980 command_runner.go:130] >  Birth: -
	I0927 18:17:39.858610   50980 start.go:563] Will wait 60s for crictl version
	I0927 18:17:39.858680   50980 ssh_runner.go:195] Run: which crictl
	I0927 18:17:39.862280   50980 command_runner.go:130] > /usr/bin/crictl
	I0927 18:17:39.862427   50980 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:17:39.909400   50980 command_runner.go:130] > Version:  0.1.0
	I0927 18:17:39.909423   50980 command_runner.go:130] > RuntimeName:  cri-o
	I0927 18:17:39.909428   50980 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0927 18:17:39.909433   50980 command_runner.go:130] > RuntimeApiVersion:  v1
	I0927 18:17:39.910883   50980 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:17:39.910966   50980 ssh_runner.go:195] Run: crio --version
	I0927 18:17:39.944459   50980 command_runner.go:130] > crio version 1.29.1
	I0927 18:17:39.944484   50980 command_runner.go:130] > Version:        1.29.1
	I0927 18:17:39.944490   50980 command_runner.go:130] > GitCommit:      unknown
	I0927 18:17:39.944494   50980 command_runner.go:130] > GitCommitDate:  unknown
	I0927 18:17:39.944498   50980 command_runner.go:130] > GitTreeState:   clean
	I0927 18:17:39.944504   50980 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 18:17:39.944508   50980 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 18:17:39.944512   50980 command_runner.go:130] > Compiler:       gc
	I0927 18:17:39.944519   50980 command_runner.go:130] > Platform:       linux/amd64
	I0927 18:17:39.944523   50980 command_runner.go:130] > Linkmode:       dynamic
	I0927 18:17:39.944536   50980 command_runner.go:130] > BuildTags:      
	I0927 18:17:39.944542   50980 command_runner.go:130] >   containers_image_ostree_stub
	I0927 18:17:39.944548   50980 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 18:17:39.944557   50980 command_runner.go:130] >   btrfs_noversion
	I0927 18:17:39.944564   50980 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 18:17:39.944572   50980 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 18:17:39.944578   50980 command_runner.go:130] >   seccomp
	I0927 18:17:39.944587   50980 command_runner.go:130] > LDFlags:          unknown
	I0927 18:17:39.944594   50980 command_runner.go:130] > SeccompEnabled:   true
	I0927 18:17:39.944612   50980 command_runner.go:130] > AppArmorEnabled:  false
	I0927 18:17:39.944688   50980 ssh_runner.go:195] Run: crio --version
	I0927 18:17:39.977122   50980 command_runner.go:130] > crio version 1.29.1
	I0927 18:17:39.977148   50980 command_runner.go:130] > Version:        1.29.1
	I0927 18:17:39.977156   50980 command_runner.go:130] > GitCommit:      unknown
	I0927 18:17:39.977161   50980 command_runner.go:130] > GitCommitDate:  unknown
	I0927 18:17:39.977165   50980 command_runner.go:130] > GitTreeState:   clean
	I0927 18:17:39.977171   50980 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I0927 18:17:39.977174   50980 command_runner.go:130] > GoVersion:      go1.21.6
	I0927 18:17:39.977178   50980 command_runner.go:130] > Compiler:       gc
	I0927 18:17:39.977183   50980 command_runner.go:130] > Platform:       linux/amd64
	I0927 18:17:39.977188   50980 command_runner.go:130] > Linkmode:       dynamic
	I0927 18:17:39.977192   50980 command_runner.go:130] > BuildTags:      
	I0927 18:17:39.977196   50980 command_runner.go:130] >   containers_image_ostree_stub
	I0927 18:17:39.977206   50980 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0927 18:17:39.977212   50980 command_runner.go:130] >   btrfs_noversion
	I0927 18:17:39.977218   50980 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0927 18:17:39.977223   50980 command_runner.go:130] >   libdm_no_deferred_remove
	I0927 18:17:39.977228   50980 command_runner.go:130] >   seccomp
	I0927 18:17:39.977234   50980 command_runner.go:130] > LDFlags:          unknown
	I0927 18:17:39.977240   50980 command_runner.go:130] > SeccompEnabled:   true
	I0927 18:17:39.977245   50980 command_runner.go:130] > AppArmorEnabled:  false
	I0927 18:17:39.981009   50980 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 18:17:39.982239   50980 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:17:39.985456   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:39.985864   50980 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:17:39.985889   50980 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:17:39.986081   50980 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 18:17:39.990207   50980 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0927 18:17:39.990315   50980 kubeadm.go:883] updating cluster {Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:17:39.990461   50980 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:17:39.990526   50980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:17:40.029470   50980 command_runner.go:130] > {
	I0927 18:17:40.029502   50980 command_runner.go:130] >   "images": [
	I0927 18:17:40.029506   50980 command_runner.go:130] >     {
	I0927 18:17:40.029514   50980 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 18:17:40.029519   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029529   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 18:17:40.029535   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029541   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029554   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 18:17:40.029567   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 18:17:40.029574   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029582   50980 command_runner.go:130] >       "size": "87190579",
	I0927 18:17:40.029590   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029597   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.029607   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.029618   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.029624   50980 command_runner.go:130] >     },
	I0927 18:17:40.029628   50980 command_runner.go:130] >     {
	I0927 18:17:40.029634   50980 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 18:17:40.029640   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029645   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 18:17:40.029651   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029657   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029670   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 18:17:40.029685   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 18:17:40.029694   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029702   50980 command_runner.go:130] >       "size": "1363676",
	I0927 18:17:40.029709   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029716   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.029722   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.029726   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.029732   50980 command_runner.go:130] >     },
	I0927 18:17:40.029735   50980 command_runner.go:130] >     {
	I0927 18:17:40.029743   50980 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 18:17:40.029749   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029761   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 18:17:40.029769   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029776   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029792   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 18:17:40.029806   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 18:17:40.029814   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029819   50980 command_runner.go:130] >       "size": "31470524",
	I0927 18:17:40.029825   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029829   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.029835   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.029840   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.029845   50980 command_runner.go:130] >     },
	I0927 18:17:40.029849   50980 command_runner.go:130] >     {
	I0927 18:17:40.029869   50980 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 18:17:40.029881   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.029889   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 18:17:40.029898   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029905   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.029919   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 18:17:40.029941   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 18:17:40.029948   50980 command_runner.go:130] >       ],
	I0927 18:17:40.029963   50980 command_runner.go:130] >       "size": "63273227",
	I0927 18:17:40.029975   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.029984   50980 command_runner.go:130] >       "username": "nonroot",
	I0927 18:17:40.029990   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030000   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030009   50980 command_runner.go:130] >     },
	I0927 18:17:40.030017   50980 command_runner.go:130] >     {
	I0927 18:17:40.030055   50980 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 18:17:40.030098   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030111   50980 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 18:17:40.030121   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030130   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030145   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 18:17:40.030159   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 18:17:40.030168   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030177   50980 command_runner.go:130] >       "size": "149009664",
	I0927 18:17:40.030185   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030193   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030202   50980 command_runner.go:130] >       },
	I0927 18:17:40.030208   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030218   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030228   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030237   50980 command_runner.go:130] >     },
	I0927 18:17:40.030245   50980 command_runner.go:130] >     {
	I0927 18:17:40.030256   50980 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 18:17:40.030278   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030289   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 18:17:40.030297   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030307   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030321   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 18:17:40.030336   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 18:17:40.030351   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030360   50980 command_runner.go:130] >       "size": "95237600",
	I0927 18:17:40.030367   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030373   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030381   50980 command_runner.go:130] >       },
	I0927 18:17:40.030392   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030398   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030408   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030416   50980 command_runner.go:130] >     },
	I0927 18:17:40.030422   50980 command_runner.go:130] >     {
	I0927 18:17:40.030432   50980 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 18:17:40.030442   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030454   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 18:17:40.030463   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030468   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030478   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 18:17:40.030494   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 18:17:40.030502   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030510   50980 command_runner.go:130] >       "size": "89437508",
	I0927 18:17:40.030519   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030528   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030537   50980 command_runner.go:130] >       },
	I0927 18:17:40.030546   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030553   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030557   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030565   50980 command_runner.go:130] >     },
	I0927 18:17:40.030574   50980 command_runner.go:130] >     {
	I0927 18:17:40.030595   50980 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 18:17:40.030606   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030617   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 18:17:40.030626   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030635   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030681   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 18:17:40.030696   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 18:17:40.030705   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030712   50980 command_runner.go:130] >       "size": "92733849",
	I0927 18:17:40.030721   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.030729   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030734   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030743   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030748   50980 command_runner.go:130] >     },
	I0927 18:17:40.030754   50980 command_runner.go:130] >     {
	I0927 18:17:40.030764   50980 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 18:17:40.030770   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030778   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 18:17:40.030784   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030792   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030807   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 18:17:40.030818   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 18:17:40.030825   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030835   50980 command_runner.go:130] >       "size": "68420934",
	I0927 18:17:40.030845   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.030854   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.030862   50980 command_runner.go:130] >       },
	I0927 18:17:40.030871   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.030881   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.030890   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.030896   50980 command_runner.go:130] >     },
	I0927 18:17:40.030900   50980 command_runner.go:130] >     {
	I0927 18:17:40.030910   50980 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 18:17:40.030926   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.030936   50980 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 18:17:40.030944   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030954   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.030968   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 18:17:40.030981   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 18:17:40.030987   50980 command_runner.go:130] >       ],
	I0927 18:17:40.030992   50980 command_runner.go:130] >       "size": "742080",
	I0927 18:17:40.031001   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.031011   50980 command_runner.go:130] >         "value": "65535"
	I0927 18:17:40.031017   50980 command_runner.go:130] >       },
	I0927 18:17:40.031027   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.031036   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.031046   50980 command_runner.go:130] >       "pinned": true
	I0927 18:17:40.031055   50980 command_runner.go:130] >     }
	I0927 18:17:40.031062   50980 command_runner.go:130] >   ]
	I0927 18:17:40.031071   50980 command_runner.go:130] > }
	I0927 18:17:40.031260   50980 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:17:40.031273   50980 crio.go:433] Images already preloaded, skipping extraction
	I0927 18:17:40.031319   50980 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:17:40.064679   50980 command_runner.go:130] > {
	I0927 18:17:40.064717   50980 command_runner.go:130] >   "images": [
	I0927 18:17:40.064724   50980 command_runner.go:130] >     {
	I0927 18:17:40.064735   50980 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0927 18:17:40.064741   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.064753   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0927 18:17:40.064758   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064764   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.064778   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0927 18:17:40.064793   50980 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0927 18:17:40.064799   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064804   50980 command_runner.go:130] >       "size": "87190579",
	I0927 18:17:40.064809   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.064813   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.064831   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.064841   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.064846   50980 command_runner.go:130] >     },
	I0927 18:17:40.064851   50980 command_runner.go:130] >     {
	I0927 18:17:40.064860   50980 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0927 18:17:40.064866   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.064874   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0927 18:17:40.064880   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064888   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.064900   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0927 18:17:40.064913   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0927 18:17:40.064922   50980 command_runner.go:130] >       ],
	I0927 18:17:40.064928   50980 command_runner.go:130] >       "size": "1363676",
	I0927 18:17:40.064935   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.064946   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.064954   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.064965   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.064974   50980 command_runner.go:130] >     },
	I0927 18:17:40.064980   50980 command_runner.go:130] >     {
	I0927 18:17:40.064991   50980 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0927 18:17:40.065000   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065010   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0927 18:17:40.065018   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065026   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065041   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0927 18:17:40.065057   50980 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0927 18:17:40.065072   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065083   50980 command_runner.go:130] >       "size": "31470524",
	I0927 18:17:40.065091   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.065100   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065107   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065117   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065124   50980 command_runner.go:130] >     },
	I0927 18:17:40.065132   50980 command_runner.go:130] >     {
	I0927 18:17:40.065143   50980 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0927 18:17:40.065151   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065158   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0927 18:17:40.065165   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065174   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065186   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0927 18:17:40.065205   50980 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0927 18:17:40.065213   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065222   50980 command_runner.go:130] >       "size": "63273227",
	I0927 18:17:40.065237   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.065280   50980 command_runner.go:130] >       "username": "nonroot",
	I0927 18:17:40.065294   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065300   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065305   50980 command_runner.go:130] >     },
	I0927 18:17:40.065312   50980 command_runner.go:130] >     {
	I0927 18:17:40.065324   50980 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0927 18:17:40.065335   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065344   50980 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0927 18:17:40.065352   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065359   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065373   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0927 18:17:40.065387   50980 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0927 18:17:40.065396   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065403   50980 command_runner.go:130] >       "size": "149009664",
	I0927 18:17:40.065410   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.065419   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.065426   50980 command_runner.go:130] >       },
	I0927 18:17:40.065436   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065443   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065456   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065462   50980 command_runner.go:130] >     },
	I0927 18:17:40.065469   50980 command_runner.go:130] >     {
	I0927 18:17:40.065480   50980 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0927 18:17:40.065489   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065500   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0927 18:17:40.065505   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065512   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065528   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0927 18:17:40.065543   50980 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0927 18:17:40.065551   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065559   50980 command_runner.go:130] >       "size": "95237600",
	I0927 18:17:40.065569   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.065577   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.065584   50980 command_runner.go:130] >       },
	I0927 18:17:40.065591   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065600   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065607   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065619   50980 command_runner.go:130] >     },
	I0927 18:17:40.065628   50980 command_runner.go:130] >     {
	I0927 18:17:40.065639   50980 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0927 18:17:40.065648   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065659   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0927 18:17:40.065668   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065676   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065692   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0927 18:17:40.065706   50980 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0927 18:17:40.065718   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065729   50980 command_runner.go:130] >       "size": "89437508",
	I0927 18:17:40.065738   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.065746   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.065753   50980 command_runner.go:130] >       },
	I0927 18:17:40.065761   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065770   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065777   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065786   50980 command_runner.go:130] >     },
	I0927 18:17:40.065792   50980 command_runner.go:130] >     {
	I0927 18:17:40.065806   50980 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0927 18:17:40.065815   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065825   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0927 18:17:40.065833   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065840   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.065869   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0927 18:17:40.065884   50980 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0927 18:17:40.065893   50980 command_runner.go:130] >       ],
	I0927 18:17:40.065901   50980 command_runner.go:130] >       "size": "92733849",
	I0927 18:17:40.065911   50980 command_runner.go:130] >       "uid": null,
	I0927 18:17:40.065920   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.065928   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.065939   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.065947   50980 command_runner.go:130] >     },
	I0927 18:17:40.065953   50980 command_runner.go:130] >     {
	I0927 18:17:40.065966   50980 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0927 18:17:40.065976   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.065986   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0927 18:17:40.065994   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066001   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.066016   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0927 18:17:40.066029   50980 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0927 18:17:40.066037   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066044   50980 command_runner.go:130] >       "size": "68420934",
	I0927 18:17:40.066053   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.066060   50980 command_runner.go:130] >         "value": "0"
	I0927 18:17:40.066069   50980 command_runner.go:130] >       },
	I0927 18:17:40.066076   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.066086   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.066095   50980 command_runner.go:130] >       "pinned": false
	I0927 18:17:40.066103   50980 command_runner.go:130] >     },
	I0927 18:17:40.066110   50980 command_runner.go:130] >     {
	I0927 18:17:40.066123   50980 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0927 18:17:40.066133   50980 command_runner.go:130] >       "repoTags": [
	I0927 18:17:40.066142   50980 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0927 18:17:40.066150   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066157   50980 command_runner.go:130] >       "repoDigests": [
	I0927 18:17:40.066187   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0927 18:17:40.066209   50980 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0927 18:17:40.066218   50980 command_runner.go:130] >       ],
	I0927 18:17:40.066225   50980 command_runner.go:130] >       "size": "742080",
	I0927 18:17:40.066234   50980 command_runner.go:130] >       "uid": {
	I0927 18:17:40.066242   50980 command_runner.go:130] >         "value": "65535"
	I0927 18:17:40.066294   50980 command_runner.go:130] >       },
	I0927 18:17:40.066303   50980 command_runner.go:130] >       "username": "",
	I0927 18:17:40.066309   50980 command_runner.go:130] >       "spec": null,
	I0927 18:17:40.066319   50980 command_runner.go:130] >       "pinned": true
	I0927 18:17:40.066326   50980 command_runner.go:130] >     }
	I0927 18:17:40.066336   50980 command_runner.go:130] >   ]
	I0927 18:17:40.066344   50980 command_runner.go:130] > }
	I0927 18:17:40.066519   50980 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:17:40.066543   50980 cache_images.go:84] Images are preloaded, skipping loading
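	The image inventory logged above is the JSON returned by the CRI-O image-status query: a top-level "images" array whose entries carry id, repoTags, repoDigests, size, uid, username, spec and pinned. As a minimal sketch only (the struct below is illustrative, not a minikube or CRI-O type, and it assumes the array is keyed "images" as in the standard CRI ListImagesResponse), such a payload can be decoded in Go like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the shape of the JSON dumped in the log above:
	// a top-level "images" array whose entries carry id, repoTags,
	// repoDigests, size, username and pinned (uid and spec are omitted
	// here). Illustrative only; not a minikube or CRI-O type.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // sizes are emitted as strings, e.g. "742080"
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// One entry taken from the listing above (the pause image).
		raw := []byte(`{"images":[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"],"repoDigests":[],"size":"742080","username":"","pinned":true}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%v size=%s pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
		}
	}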
	I0927 18:17:40.066555   50980 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.31.1 crio true true} ...
	I0927 18:17:40.066705   50980 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-922780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 18:17:40.066796   50980 ssh_runner.go:195] Run: crio config
	I0927 18:17:40.105427   50980 command_runner.go:130] ! time="2024-09-27 18:17:40.079602450Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0927 18:17:40.111548   50980 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
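	The dump that follows is the plain TOML that `crio config` prints. Purely as an illustration of its structure (not how minikube consumes it; the struct and the choice of the BurntSushi decoder are assumptions), a few of the keys visible below can be read back in Go:

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// crioConfig captures a handful of the keys that appear in the
	// `crio config` output below; it is an editorial sketch, not a type
	// used by minikube or CRI-O themselves.
	type crioConfig struct {
		Crio struct {
			Root          string `toml:"root"`
			StorageDriver string `toml:"storage_driver"`
			Runtime       struct {
				CgroupManager string `toml:"cgroup_manager"`
				PidsLimit     int64  `toml:"pids_limit"`
			} `toml:"runtime"`
			Image struct {
				PauseImage string `toml:"pause_image"`
			} `toml:"image"`
		} `toml:"crio"`
	}

	func main() {
		// A short excerpt of the TOML emitted below.
		const doc = `
	[crio]
	root = "/var/lib/containers/storage"
	storage_driver = "overlay"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	pids_limit = 1024

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	`
		var cfg crioConfig
		if _, err := toml.Decode(doc, &cfg); err != nil {
			panic(err)
		}
		fmt.Println("cgroup manager:", cfg.Crio.Runtime.CgroupManager)
		fmt.Println("pause image:   ", cfg.Crio.Image.PauseImage)
	}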
	I0927 18:17:40.118321   50980 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0927 18:17:40.118350   50980 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0927 18:17:40.118360   50980 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0927 18:17:40.118365   50980 command_runner.go:130] > #
	I0927 18:17:40.118379   50980 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0927 18:17:40.118388   50980 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0927 18:17:40.118397   50980 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0927 18:17:40.118418   50980 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0927 18:17:40.118427   50980 command_runner.go:130] > # reload'.
	I0927 18:17:40.118437   50980 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0927 18:17:40.118448   50980 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0927 18:17:40.118456   50980 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0927 18:17:40.118461   50980 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0927 18:17:40.118473   50980 command_runner.go:130] > [crio]
	I0927 18:17:40.118482   50980 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0927 18:17:40.118487   50980 command_runner.go:130] > # containers images, in this directory.
	I0927 18:17:40.118492   50980 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0927 18:17:40.118505   50980 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0927 18:17:40.118513   50980 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0927 18:17:40.118521   50980 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0927 18:17:40.118525   50980 command_runner.go:130] > # imagestore = ""
	I0927 18:17:40.118533   50980 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0927 18:17:40.118538   50980 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0927 18:17:40.118543   50980 command_runner.go:130] > storage_driver = "overlay"
	I0927 18:17:40.118548   50980 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0927 18:17:40.118553   50980 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0927 18:17:40.118557   50980 command_runner.go:130] > storage_option = [
	I0927 18:17:40.118562   50980 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0927 18:17:40.118567   50980 command_runner.go:130] > ]
	I0927 18:17:40.118574   50980 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0927 18:17:40.118580   50980 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0927 18:17:40.118585   50980 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0927 18:17:40.118589   50980 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0927 18:17:40.118598   50980 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0927 18:17:40.118602   50980 command_runner.go:130] > # always happen on a node reboot
	I0927 18:17:40.118608   50980 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0927 18:17:40.118619   50980 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0927 18:17:40.118627   50980 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0927 18:17:40.118631   50980 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0927 18:17:40.118636   50980 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0927 18:17:40.118662   50980 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0927 18:17:40.118676   50980 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0927 18:17:40.118682   50980 command_runner.go:130] > # internal_wipe = true
	I0927 18:17:40.118689   50980 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0927 18:17:40.118696   50980 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0927 18:17:40.118700   50980 command_runner.go:130] > # internal_repair = false
	I0927 18:17:40.118713   50980 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0927 18:17:40.118721   50980 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0927 18:17:40.118727   50980 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0927 18:17:40.118734   50980 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0927 18:17:40.118742   50980 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0927 18:17:40.118747   50980 command_runner.go:130] > [crio.api]
	I0927 18:17:40.118752   50980 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0927 18:17:40.118759   50980 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0927 18:17:40.118764   50980 command_runner.go:130] > # IP address on which the stream server will listen.
	I0927 18:17:40.118769   50980 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0927 18:17:40.118775   50980 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0927 18:17:40.118782   50980 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0927 18:17:40.118785   50980 command_runner.go:130] > # stream_port = "0"
	I0927 18:17:40.118792   50980 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0927 18:17:40.118796   50980 command_runner.go:130] > # stream_enable_tls = false
	I0927 18:17:40.118803   50980 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0927 18:17:40.118809   50980 command_runner.go:130] > # stream_idle_timeout = ""
	I0927 18:17:40.118815   50980 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0927 18:17:40.118823   50980 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0927 18:17:40.118826   50980 command_runner.go:130] > # minutes.
	I0927 18:17:40.118830   50980 command_runner.go:130] > # stream_tls_cert = ""
	I0927 18:17:40.118835   50980 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0927 18:17:40.118843   50980 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0927 18:17:40.118847   50980 command_runner.go:130] > # stream_tls_key = ""
	I0927 18:17:40.118854   50980 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0927 18:17:40.118860   50980 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0927 18:17:40.118885   50980 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0927 18:17:40.118891   50980 command_runner.go:130] > # stream_tls_ca = ""
	I0927 18:17:40.118898   50980 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 18:17:40.118902   50980 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0927 18:17:40.118909   50980 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0927 18:17:40.118916   50980 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0927 18:17:40.118922   50980 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0927 18:17:40.118935   50980 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0927 18:17:40.118941   50980 command_runner.go:130] > [crio.runtime]
	I0927 18:17:40.118946   50980 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0927 18:17:40.118953   50980 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0927 18:17:40.118958   50980 command_runner.go:130] > # "nofile=1024:2048"
	I0927 18:17:40.118965   50980 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0927 18:17:40.118970   50980 command_runner.go:130] > # default_ulimits = [
	I0927 18:17:40.118975   50980 command_runner.go:130] > # ]
	I0927 18:17:40.118980   50980 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0927 18:17:40.118985   50980 command_runner.go:130] > # no_pivot = false
	I0927 18:17:40.118993   50980 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0927 18:17:40.119001   50980 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0927 18:17:40.119005   50980 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0927 18:17:40.119011   50980 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0927 18:17:40.119016   50980 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0927 18:17:40.119022   50980 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 18:17:40.119027   50980 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0927 18:17:40.119032   50980 command_runner.go:130] > # Cgroup setting for conmon
	I0927 18:17:40.119040   50980 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0927 18:17:40.119044   50980 command_runner.go:130] > conmon_cgroup = "pod"
	I0927 18:17:40.119052   50980 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0927 18:17:40.119057   50980 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0927 18:17:40.119065   50980 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0927 18:17:40.119069   50980 command_runner.go:130] > conmon_env = [
	I0927 18:17:40.119077   50980 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 18:17:40.119080   50980 command_runner.go:130] > ]
	I0927 18:17:40.119086   50980 command_runner.go:130] > # Additional environment variables to set for all the
	I0927 18:17:40.119091   50980 command_runner.go:130] > # containers. These are overridden if set in the
	I0927 18:17:40.119099   50980 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0927 18:17:40.119102   50980 command_runner.go:130] > # default_env = [
	I0927 18:17:40.119108   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119113   50980 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0927 18:17:40.119119   50980 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0927 18:17:40.119130   50980 command_runner.go:130] > # selinux = false
	I0927 18:17:40.119136   50980 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0927 18:17:40.119143   50980 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0927 18:17:40.119148   50980 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0927 18:17:40.119154   50980 command_runner.go:130] > # seccomp_profile = ""
	I0927 18:17:40.119159   50980 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0927 18:17:40.119165   50980 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0927 18:17:40.119171   50980 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0927 18:17:40.119177   50980 command_runner.go:130] > # which might increase security.
	I0927 18:17:40.119184   50980 command_runner.go:130] > # This option is currently deprecated,
	I0927 18:17:40.119192   50980 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0927 18:17:40.119197   50980 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0927 18:17:40.119205   50980 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0927 18:17:40.119210   50980 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0927 18:17:40.119220   50980 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0927 18:17:40.119228   50980 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0927 18:17:40.119233   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.119237   50980 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0927 18:17:40.119242   50980 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0927 18:17:40.119264   50980 command_runner.go:130] > # the cgroup blockio controller.
	I0927 18:17:40.119270   50980 command_runner.go:130] > # blockio_config_file = ""
	I0927 18:17:40.119276   50980 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0927 18:17:40.119282   50980 command_runner.go:130] > # blockio parameters.
	I0927 18:17:40.119286   50980 command_runner.go:130] > # blockio_reload = false
	I0927 18:17:40.119294   50980 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0927 18:17:40.119298   50980 command_runner.go:130] > # irqbalance daemon.
	I0927 18:17:40.119306   50980 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0927 18:17:40.119311   50980 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0927 18:17:40.119320   50980 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0927 18:17:40.119326   50980 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0927 18:17:40.119334   50980 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0927 18:17:40.119340   50980 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0927 18:17:40.119347   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.119356   50980 command_runner.go:130] > # rdt_config_file = ""
	I0927 18:17:40.119363   50980 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0927 18:17:40.119368   50980 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0927 18:17:40.119399   50980 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0927 18:17:40.119405   50980 command_runner.go:130] > # separate_pull_cgroup = ""
	I0927 18:17:40.119411   50980 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0927 18:17:40.119417   50980 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0927 18:17:40.119421   50980 command_runner.go:130] > # will be added.
	I0927 18:17:40.119425   50980 command_runner.go:130] > # default_capabilities = [
	I0927 18:17:40.119430   50980 command_runner.go:130] > # 	"CHOWN",
	I0927 18:17:40.119436   50980 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0927 18:17:40.119440   50980 command_runner.go:130] > # 	"FSETID",
	I0927 18:17:40.119444   50980 command_runner.go:130] > # 	"FOWNER",
	I0927 18:17:40.119447   50980 command_runner.go:130] > # 	"SETGID",
	I0927 18:17:40.119451   50980 command_runner.go:130] > # 	"SETUID",
	I0927 18:17:40.119457   50980 command_runner.go:130] > # 	"SETPCAP",
	I0927 18:17:40.119460   50980 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0927 18:17:40.119464   50980 command_runner.go:130] > # 	"KILL",
	I0927 18:17:40.119467   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119475   50980 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0927 18:17:40.119483   50980 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0927 18:17:40.119487   50980 command_runner.go:130] > # add_inheritable_capabilities = false
	I0927 18:17:40.119496   50980 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0927 18:17:40.119508   50980 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 18:17:40.119512   50980 command_runner.go:130] > default_sysctls = [
	I0927 18:17:40.119516   50980 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0927 18:17:40.119520   50980 command_runner.go:130] > ]
	I0927 18:17:40.119524   50980 command_runner.go:130] > # List of devices on the host that a
	I0927 18:17:40.119530   50980 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0927 18:17:40.119536   50980 command_runner.go:130] > # allowed_devices = [
	I0927 18:17:40.119539   50980 command_runner.go:130] > # 	"/dev/fuse",
	I0927 18:17:40.119542   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119547   50980 command_runner.go:130] > # List of additional devices, specified as
	I0927 18:17:40.119559   50980 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0927 18:17:40.119566   50980 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0927 18:17:40.119571   50980 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0927 18:17:40.119577   50980 command_runner.go:130] > # additional_devices = [
	I0927 18:17:40.119581   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119586   50980 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0927 18:17:40.119591   50980 command_runner.go:130] > # cdi_spec_dirs = [
	I0927 18:17:40.119595   50980 command_runner.go:130] > # 	"/etc/cdi",
	I0927 18:17:40.119599   50980 command_runner.go:130] > # 	"/var/run/cdi",
	I0927 18:17:40.119602   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119608   50980 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0927 18:17:40.119616   50980 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0927 18:17:40.119619   50980 command_runner.go:130] > # Defaults to false.
	I0927 18:17:40.119624   50980 command_runner.go:130] > # device_ownership_from_security_context = false
	I0927 18:17:40.119632   50980 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0927 18:17:40.119638   50980 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0927 18:17:40.119644   50980 command_runner.go:130] > # hooks_dir = [
	I0927 18:17:40.119648   50980 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0927 18:17:40.119651   50980 command_runner.go:130] > # ]
	I0927 18:17:40.119657   50980 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0927 18:17:40.119666   50980 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0927 18:17:40.119671   50980 command_runner.go:130] > # its default mounts from the following two files:
	I0927 18:17:40.119676   50980 command_runner.go:130] > #
	I0927 18:17:40.119682   50980 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0927 18:17:40.119690   50980 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0927 18:17:40.119696   50980 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0927 18:17:40.119701   50980 command_runner.go:130] > #
	I0927 18:17:40.119706   50980 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0927 18:17:40.119721   50980 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0927 18:17:40.119729   50980 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0927 18:17:40.119736   50980 command_runner.go:130] > #      only add mounts it finds in this file.
	I0927 18:17:40.119740   50980 command_runner.go:130] > #
	I0927 18:17:40.119744   50980 command_runner.go:130] > # default_mounts_file = ""
	I0927 18:17:40.119754   50980 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0927 18:17:40.119760   50980 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0927 18:17:40.119764   50980 command_runner.go:130] > pids_limit = 1024
	I0927 18:17:40.119769   50980 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0927 18:17:40.119775   50980 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0927 18:17:40.119780   50980 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0927 18:17:40.119788   50980 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0927 18:17:40.119791   50980 command_runner.go:130] > # log_size_max = -1
	I0927 18:17:40.119797   50980 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0927 18:17:40.119801   50980 command_runner.go:130] > # log_to_journald = false
	I0927 18:17:40.119807   50980 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0927 18:17:40.119811   50980 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0927 18:17:40.119816   50980 command_runner.go:130] > # Path to directory for container attach sockets.
	I0927 18:17:40.119824   50980 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0927 18:17:40.119829   50980 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0927 18:17:40.119834   50980 command_runner.go:130] > # bind_mount_prefix = ""
	I0927 18:17:40.119839   50980 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0927 18:17:40.119846   50980 command_runner.go:130] > # read_only = false
	I0927 18:17:40.119851   50980 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0927 18:17:40.119859   50980 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0927 18:17:40.119863   50980 command_runner.go:130] > # live configuration reload.
	I0927 18:17:40.119868   50980 command_runner.go:130] > # log_level = "info"
	I0927 18:17:40.119874   50980 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0927 18:17:40.119881   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.119885   50980 command_runner.go:130] > # log_filter = ""
	I0927 18:17:40.119891   50980 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0927 18:17:40.119900   50980 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0927 18:17:40.119903   50980 command_runner.go:130] > # separated by comma.
	I0927 18:17:40.119910   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.119916   50980 command_runner.go:130] > # uid_mappings = ""
	I0927 18:17:40.119921   50980 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0927 18:17:40.119927   50980 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0927 18:17:40.119933   50980 command_runner.go:130] > # separated by comma.
	I0927 18:17:40.119952   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.119960   50980 command_runner.go:130] > # gid_mappings = ""
	I0927 18:17:40.119966   50980 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0927 18:17:40.119974   50980 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 18:17:40.119983   50980 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 18:17:40.119993   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.119997   50980 command_runner.go:130] > # minimum_mappable_uid = -1
	I0927 18:17:40.120003   50980 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0927 18:17:40.120008   50980 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0927 18:17:40.120015   50980 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0927 18:17:40.120022   50980 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0927 18:17:40.120029   50980 command_runner.go:130] > # minimum_mappable_gid = -1
	I0927 18:17:40.120034   50980 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0927 18:17:40.120041   50980 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0927 18:17:40.120046   50980 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0927 18:17:40.120052   50980 command_runner.go:130] > # ctr_stop_timeout = 30
	I0927 18:17:40.120057   50980 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0927 18:17:40.120064   50980 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0927 18:17:40.120069   50980 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0927 18:17:40.120076   50980 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0927 18:17:40.120080   50980 command_runner.go:130] > drop_infra_ctr = false
	I0927 18:17:40.120086   50980 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0927 18:17:40.120093   50980 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0927 18:17:40.120100   50980 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0927 18:17:40.120106   50980 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0927 18:17:40.120112   50980 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0927 18:17:40.120119   50980 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0927 18:17:40.120124   50980 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0927 18:17:40.120131   50980 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0927 18:17:40.120135   50980 command_runner.go:130] > # shared_cpuset = ""
	I0927 18:17:40.120140   50980 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0927 18:17:40.120145   50980 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0927 18:17:40.120150   50980 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0927 18:17:40.120162   50980 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0927 18:17:40.120168   50980 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0927 18:17:40.120174   50980 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0927 18:17:40.120185   50980 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0927 18:17:40.120191   50980 command_runner.go:130] > # enable_criu_support = false
	I0927 18:17:40.120196   50980 command_runner.go:130] > # Enable/disable the generation of the container,
	I0927 18:17:40.120202   50980 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0927 18:17:40.120207   50980 command_runner.go:130] > # enable_pod_events = false
	I0927 18:17:40.120213   50980 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0927 18:17:40.120228   50980 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0927 18:17:40.120233   50980 command_runner.go:130] > # default_runtime = "runc"
	I0927 18:17:40.120238   50980 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0927 18:17:40.120247   50980 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0927 18:17:40.120267   50980 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0927 18:17:40.120274   50980 command_runner.go:130] > # creation as a file is not desired either.
	I0927 18:17:40.120282   50980 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0927 18:17:40.120287   50980 command_runner.go:130] > # the hostname is being managed dynamically.
	I0927 18:17:40.120292   50980 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0927 18:17:40.120295   50980 command_runner.go:130] > # ]
	I0927 18:17:40.120301   50980 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0927 18:17:40.120309   50980 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0927 18:17:40.120314   50980 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0927 18:17:40.120319   50980 command_runner.go:130] > # Each entry in the table should follow the format:
	I0927 18:17:40.120324   50980 command_runner.go:130] > #
	I0927 18:17:40.120329   50980 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0927 18:17:40.120335   50980 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0927 18:17:40.120381   50980 command_runner.go:130] > # runtime_type = "oci"
	I0927 18:17:40.120388   50980 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0927 18:17:40.120393   50980 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0927 18:17:40.120397   50980 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0927 18:17:40.120401   50980 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0927 18:17:40.120404   50980 command_runner.go:130] > # monitor_env = []
	I0927 18:17:40.120414   50980 command_runner.go:130] > # privileged_without_host_devices = false
	I0927 18:17:40.120419   50980 command_runner.go:130] > # allowed_annotations = []
	I0927 18:17:40.120425   50980 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0927 18:17:40.120430   50980 command_runner.go:130] > # Where:
	I0927 18:17:40.120435   50980 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0927 18:17:40.120441   50980 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0927 18:17:40.120449   50980 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0927 18:17:40.120456   50980 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0927 18:17:40.120463   50980 command_runner.go:130] > #   in $PATH.
	I0927 18:17:40.120469   50980 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0927 18:17:40.120476   50980 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0927 18:17:40.120482   50980 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0927 18:17:40.120488   50980 command_runner.go:130] > #   state.
	I0927 18:17:40.120494   50980 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0927 18:17:40.120502   50980 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0927 18:17:40.120510   50980 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0927 18:17:40.120516   50980 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0927 18:17:40.120523   50980 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0927 18:17:40.120529   50980 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0927 18:17:40.120536   50980 command_runner.go:130] > #   The currently recognized values are:
	I0927 18:17:40.120542   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0927 18:17:40.120551   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0927 18:17:40.120557   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0927 18:17:40.120563   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0927 18:17:40.120571   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0927 18:17:40.120579   50980 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0927 18:17:40.120585   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0927 18:17:40.120593   50980 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0927 18:17:40.120599   50980 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0927 18:17:40.120607   50980 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0927 18:17:40.120611   50980 command_runner.go:130] > #   deprecated option "conmon".
	I0927 18:17:40.120620   50980 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0927 18:17:40.120626   50980 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0927 18:17:40.120639   50980 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0927 18:17:40.120646   50980 command_runner.go:130] > #   should be moved to the container's cgroup
	I0927 18:17:40.120653   50980 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0927 18:17:40.120660   50980 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0927 18:17:40.120666   50980 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0927 18:17:40.120673   50980 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0927 18:17:40.120677   50980 command_runner.go:130] > #
	I0927 18:17:40.120681   50980 command_runner.go:130] > # Using the seccomp notifier feature:
	I0927 18:17:40.120687   50980 command_runner.go:130] > #
	I0927 18:17:40.120694   50980 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0927 18:17:40.120700   50980 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0927 18:17:40.120705   50980 command_runner.go:130] > #
	I0927 18:17:40.120711   50980 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0927 18:17:40.120719   50980 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0927 18:17:40.120721   50980 command_runner.go:130] > #
	I0927 18:17:40.120727   50980 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0927 18:17:40.120733   50980 command_runner.go:130] > # feature.
	I0927 18:17:40.120736   50980 command_runner.go:130] > #
	I0927 18:17:40.120742   50980 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0927 18:17:40.120750   50980 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0927 18:17:40.120756   50980 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0927 18:17:40.120764   50980 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0927 18:17:40.120770   50980 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0927 18:17:40.120775   50980 command_runner.go:130] > #
	I0927 18:17:40.120781   50980 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0927 18:17:40.120786   50980 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0927 18:17:40.120791   50980 command_runner.go:130] > #
	I0927 18:17:40.120797   50980 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0927 18:17:40.120803   50980 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0927 18:17:40.120806   50980 command_runner.go:130] > #
	I0927 18:17:40.120812   50980 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0927 18:17:40.120819   50980 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0927 18:17:40.120822   50980 command_runner.go:130] > # limitation.
	I0927 18:17:40.120833   50980 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0927 18:17:40.120840   50980 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0927 18:17:40.120843   50980 command_runner.go:130] > runtime_type = "oci"
	I0927 18:17:40.120850   50980 command_runner.go:130] > runtime_root = "/run/runc"
	I0927 18:17:40.120854   50980 command_runner.go:130] > runtime_config_path = ""
	I0927 18:17:40.120859   50980 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0927 18:17:40.120866   50980 command_runner.go:130] > monitor_cgroup = "pod"
	I0927 18:17:40.120870   50980 command_runner.go:130] > monitor_exec_cgroup = ""
	I0927 18:17:40.120874   50980 command_runner.go:130] > monitor_env = [
	I0927 18:17:40.120879   50980 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0927 18:17:40.120884   50980 command_runner.go:130] > ]
	I0927 18:17:40.120888   50980 command_runner.go:130] > privileged_without_host_devices = false
	I0927 18:17:40.120894   50980 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0927 18:17:40.120902   50980 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0927 18:17:40.120908   50980 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0927 18:17:40.120917   50980 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0927 18:17:40.120925   50980 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0927 18:17:40.120933   50980 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0927 18:17:40.120942   50980 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0927 18:17:40.120952   50980 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0927 18:17:40.120957   50980 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0927 18:17:40.120964   50980 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0927 18:17:40.120969   50980 command_runner.go:130] > # Example:
	I0927 18:17:40.120973   50980 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0927 18:17:40.120978   50980 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0927 18:17:40.120985   50980 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0927 18:17:40.120990   50980 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0927 18:17:40.120993   50980 command_runner.go:130] > # cpuset = 0
	I0927 18:17:40.120997   50980 command_runner.go:130] > # cpushares = "0-1"
	I0927 18:17:40.121003   50980 command_runner.go:130] > # Where:
	I0927 18:17:40.121007   50980 command_runner.go:130] > # The workload name is workload-type.
	I0927 18:17:40.121013   50980 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0927 18:17:40.121020   50980 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0927 18:17:40.121030   50980 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0927 18:17:40.121040   50980 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0927 18:17:40.121048   50980 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0927 18:17:40.121052   50980 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0927 18:17:40.121058   50980 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0927 18:17:40.121063   50980 command_runner.go:130] > # Default value is set to true
	I0927 18:17:40.121067   50980 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0927 18:17:40.121074   50980 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0927 18:17:40.121079   50980 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0927 18:17:40.121086   50980 command_runner.go:130] > # Default value is set to 'false'
	I0927 18:17:40.121090   50980 command_runner.go:130] > # disable_hostport_mapping = false
	I0927 18:17:40.121098   50980 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0927 18:17:40.121102   50980 command_runner.go:130] > #
	I0927 18:17:40.121107   50980 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0927 18:17:40.121112   50980 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0927 18:17:40.121117   50980 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0927 18:17:40.121123   50980 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0927 18:17:40.121128   50980 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0927 18:17:40.121134   50980 command_runner.go:130] > [crio.image]
	I0927 18:17:40.121140   50980 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0927 18:17:40.121143   50980 command_runner.go:130] > # default_transport = "docker://"
	I0927 18:17:40.121149   50980 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0927 18:17:40.121155   50980 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0927 18:17:40.121158   50980 command_runner.go:130] > # global_auth_file = ""
	I0927 18:17:40.121162   50980 command_runner.go:130] > # The image used to instantiate infra containers.
	I0927 18:17:40.121167   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.121171   50980 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0927 18:17:40.121177   50980 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0927 18:17:40.121185   50980 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0927 18:17:40.121190   50980 command_runner.go:130] > # This option supports live configuration reload.
	I0927 18:17:40.121195   50980 command_runner.go:130] > # pause_image_auth_file = ""
	I0927 18:17:40.121201   50980 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0927 18:17:40.121209   50980 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0927 18:17:40.121218   50980 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0927 18:17:40.121226   50980 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0927 18:17:40.121230   50980 command_runner.go:130] > # pause_command = "/pause"
	I0927 18:17:40.121238   50980 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0927 18:17:40.121244   50980 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0927 18:17:40.121253   50980 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0927 18:17:40.121263   50980 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0927 18:17:40.121269   50980 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0927 18:17:40.121277   50980 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0927 18:17:40.121281   50980 command_runner.go:130] > # pinned_images = [
	I0927 18:17:40.121286   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121291   50980 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0927 18:17:40.121299   50980 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0927 18:17:40.121305   50980 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0927 18:17:40.121311   50980 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0927 18:17:40.121316   50980 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0927 18:17:40.121321   50980 command_runner.go:130] > # signature_policy = ""
	I0927 18:17:40.121327   50980 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0927 18:17:40.121335   50980 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0927 18:17:40.121341   50980 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0927 18:17:40.121349   50980 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0927 18:17:40.121357   50980 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0927 18:17:40.121361   50980 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0927 18:17:40.121368   50980 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0927 18:17:40.121374   50980 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0927 18:17:40.121380   50980 command_runner.go:130] > # changing them here.
	I0927 18:17:40.121384   50980 command_runner.go:130] > # insecure_registries = [
	I0927 18:17:40.121387   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121393   50980 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0927 18:17:40.121400   50980 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0927 18:17:40.121404   50980 command_runner.go:130] > # image_volumes = "mkdir"
	I0927 18:17:40.121411   50980 command_runner.go:130] > # Temporary directory to use for storing big files
	I0927 18:17:40.121415   50980 command_runner.go:130] > # big_files_temporary_dir = ""
	I0927 18:17:40.121428   50980 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0927 18:17:40.121434   50980 command_runner.go:130] > # CNI plugins.
	I0927 18:17:40.121437   50980 command_runner.go:130] > [crio.network]
	I0927 18:17:40.121443   50980 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0927 18:17:40.121449   50980 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0927 18:17:40.121455   50980 command_runner.go:130] > # cni_default_network = ""
	I0927 18:17:40.121460   50980 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0927 18:17:40.121467   50980 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0927 18:17:40.121471   50980 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0927 18:17:40.121477   50980 command_runner.go:130] > # plugin_dirs = [
	I0927 18:17:40.121481   50980 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0927 18:17:40.121484   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121490   50980 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0927 18:17:40.121496   50980 command_runner.go:130] > [crio.metrics]
	I0927 18:17:40.121501   50980 command_runner.go:130] > # Globally enable or disable metrics support.
	I0927 18:17:40.121507   50980 command_runner.go:130] > enable_metrics = true
	I0927 18:17:40.121511   50980 command_runner.go:130] > # Specify enabled metrics collectors.
	I0927 18:17:40.121530   50980 command_runner.go:130] > # Per default all metrics are enabled.
	I0927 18:17:40.121542   50980 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0927 18:17:40.121548   50980 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0927 18:17:40.121556   50980 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0927 18:17:40.121560   50980 command_runner.go:130] > # metrics_collectors = [
	I0927 18:17:40.121563   50980 command_runner.go:130] > # 	"operations",
	I0927 18:17:40.121568   50980 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0927 18:17:40.121575   50980 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0927 18:17:40.121578   50980 command_runner.go:130] > # 	"operations_errors",
	I0927 18:17:40.121582   50980 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0927 18:17:40.121586   50980 command_runner.go:130] > # 	"image_pulls_by_name",
	I0927 18:17:40.121590   50980 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0927 18:17:40.121598   50980 command_runner.go:130] > # 	"image_pulls_failures",
	I0927 18:17:40.121606   50980 command_runner.go:130] > # 	"image_pulls_successes",
	I0927 18:17:40.121610   50980 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0927 18:17:40.121616   50980 command_runner.go:130] > # 	"image_layer_reuse",
	I0927 18:17:40.121625   50980 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0927 18:17:40.121631   50980 command_runner.go:130] > # 	"containers_oom_total",
	I0927 18:17:40.121635   50980 command_runner.go:130] > # 	"containers_oom",
	I0927 18:17:40.121641   50980 command_runner.go:130] > # 	"processes_defunct",
	I0927 18:17:40.121645   50980 command_runner.go:130] > # 	"operations_total",
	I0927 18:17:40.121649   50980 command_runner.go:130] > # 	"operations_latency_seconds",
	I0927 18:17:40.121653   50980 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0927 18:17:40.121658   50980 command_runner.go:130] > # 	"operations_errors_total",
	I0927 18:17:40.121664   50980 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0927 18:17:40.121669   50980 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0927 18:17:40.121674   50980 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0927 18:17:40.121678   50980 command_runner.go:130] > # 	"image_pulls_success_total",
	I0927 18:17:40.121685   50980 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0927 18:17:40.121689   50980 command_runner.go:130] > # 	"containers_oom_count_total",
	I0927 18:17:40.121693   50980 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0927 18:17:40.121699   50980 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0927 18:17:40.121703   50980 command_runner.go:130] > # ]
	I0927 18:17:40.121708   50980 command_runner.go:130] > # The port on which the metrics server will listen.
	I0927 18:17:40.121714   50980 command_runner.go:130] > # metrics_port = 9090
	I0927 18:17:40.121718   50980 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0927 18:17:40.121724   50980 command_runner.go:130] > # metrics_socket = ""
	I0927 18:17:40.121729   50980 command_runner.go:130] > # The certificate for the secure metrics server.
	I0927 18:17:40.121734   50980 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0927 18:17:40.121741   50980 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0927 18:17:40.121745   50980 command_runner.go:130] > # certificate on any modification event.
	I0927 18:17:40.121749   50980 command_runner.go:130] > # metrics_cert = ""
	I0927 18:17:40.121754   50980 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0927 18:17:40.121761   50980 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0927 18:17:40.121765   50980 command_runner.go:130] > # metrics_key = ""
	I0927 18:17:40.121772   50980 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0927 18:17:40.121775   50980 command_runner.go:130] > [crio.tracing]
	I0927 18:17:40.121781   50980 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0927 18:17:40.121785   50980 command_runner.go:130] > # enable_tracing = false
	I0927 18:17:40.121795   50980 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0927 18:17:40.121802   50980 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0927 18:17:40.121809   50980 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0927 18:17:40.121816   50980 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0927 18:17:40.121820   50980 command_runner.go:130] > # CRI-O NRI configuration.
	I0927 18:17:40.121825   50980 command_runner.go:130] > [crio.nri]
	I0927 18:17:40.121829   50980 command_runner.go:130] > # Globally enable or disable NRI.
	I0927 18:17:40.121833   50980 command_runner.go:130] > # enable_nri = false
	I0927 18:17:40.121839   50980 command_runner.go:130] > # NRI socket to listen on.
	I0927 18:17:40.121847   50980 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0927 18:17:40.121851   50980 command_runner.go:130] > # NRI plugin directory to use.
	I0927 18:17:40.121858   50980 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0927 18:17:40.121862   50980 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0927 18:17:40.121869   50980 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0927 18:17:40.121874   50980 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0927 18:17:40.121880   50980 command_runner.go:130] > # nri_disable_connections = false
	I0927 18:17:40.121885   50980 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0927 18:17:40.121891   50980 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0927 18:17:40.121896   50980 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0927 18:17:40.121903   50980 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0927 18:17:40.121908   50980 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0927 18:17:40.121912   50980 command_runner.go:130] > [crio.stats]
	I0927 18:17:40.121919   50980 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0927 18:17:40.121924   50980 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0927 18:17:40.121931   50980 command_runner.go:130] > # stats_collection_period = 0
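
The dump above is the full CRI-O configuration the node reports back to minikube; only two values are set explicitly in this part of the file, pause_image under [crio.image] and enable_metrics under [crio.metrics]. As a minimal sketch only, not part of the test run, they could be read back with the BurntSushi TOML decoder; the file path and the trimmed-down struct below are assumptions for the example, and the live settings may instead come from /etc/crio/crio.conf.d drop-ins.

// Illustrative sketch only: read pause_image and enable_metrics back out of a
// CRI-O config file like the one dumped above. Path and struct shape are
// assumptions for this example, not part of the test run.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// /etc/crio/crio.conf is the conventional location; adjust if drop-ins are used.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	fmt.Println("enable_metrics:", cfg.Crio.Metrics.EnableMetrics)
}
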
	I0927 18:17:40.122051   50980 cni.go:84] Creating CNI manager for ""
	I0927 18:17:40.122061   50980 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0927 18:17:40.122069   50980 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:17:40.122090   50980 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-922780 NodeName:multinode-922780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:17:40.122216   50980 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-922780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
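
A minimal sketch, assuming sigs.k8s.io/yaml is available: decoding the KubeletConfiguration document generated above and checking the fields that have to line up with CRI-O, cgroupDriver and containerRuntimeEndpoint. The subset struct is hand-rolled for illustration and is not the real upstream API type.

// Minimal sketch, not from the test run: decode the KubeletConfiguration
// document shown above and print the fields minikube sets for CRI-O.
package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml"
)

const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
`

// kubeletSubset is a hand-picked subset for the example, not the full type.
type kubeletSubset struct {
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	HairpinMode              string `json:"hairpinMode"`
}

func main() {
	var k kubeletSubset
	if err := yaml.Unmarshal([]byte(kubeletDoc), &k); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver=%s endpoint=%s hairpin=%s\n",
		k.CgroupDriver, k.ContainerRuntimeEndpoint, k.HairpinMode)
}

The kubelet's cgroupDriver has to match the container runtime's cgroup manager, so a mismatch between this value and the runtime configuration is one of the first things to check when the kubelet fails to start.
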
	
	I0927 18:17:40.122281   50980 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:17:40.132358   50980 command_runner.go:130] > kubeadm
	I0927 18:17:40.132376   50980 command_runner.go:130] > kubectl
	I0927 18:17:40.132380   50980 command_runner.go:130] > kubelet
	I0927 18:17:40.132398   50980 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:17:40.132442   50980 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:17:40.142159   50980 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0927 18:17:40.160075   50980 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:17:40.177146   50980 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0927 18:17:40.195057   50980 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I0927 18:17:40.198855   50980 command_runner.go:130] > 192.168.39.6	control-plane.minikube.internal
	I0927 18:17:40.198913   50980 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:17:40.338989   50980 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:17:40.352872   50980 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780 for IP: 192.168.39.6
	I0927 18:17:40.352895   50980 certs.go:194] generating shared ca certs ...
	I0927 18:17:40.352915   50980 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:17:40.353079   50980 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:17:40.353132   50980 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:17:40.353145   50980 certs.go:256] generating profile certs ...
	I0927 18:17:40.353252   50980 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/client.key
	I0927 18:17:40.353359   50980 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.key.f36a82d8
	I0927 18:17:40.353411   50980 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.key
	I0927 18:17:40.353424   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0927 18:17:40.353450   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0927 18:17:40.353472   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 18:17:40.353505   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0927 18:17:40.353524   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0927 18:17:40.353545   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0927 18:17:40.353564   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0927 18:17:40.353580   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0927 18:17:40.353679   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:17:40.353723   50980 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:17:40.353737   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:17:40.353770   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:17:40.353799   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:17:40.353833   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:17:40.353885   50980 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:17:40.353925   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem -> /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.353945   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.353959   50980 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.354724   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:17:40.379370   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:17:40.403431   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:17:40.428263   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:17:40.454122   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0927 18:17:40.479716   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 18:17:40.503418   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:17:40.531523   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/multinode-922780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 18:17:40.554759   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:17:40.578996   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:17:40.602757   50980 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:17:40.626392   50980 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:17:40.643753   50980 ssh_runner.go:195] Run: openssl version
	I0927 18:17:40.649821   50980 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0927 18:17:40.649888   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:17:40.660387   50980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.664649   50980 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.664681   50980 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.664715   50980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:17:40.670147   50980 command_runner.go:130] > 51391683
	I0927 18:17:40.670209   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:17:40.679204   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:17:40.695642   50980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.700595   50980 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.700636   50980 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.700681   50980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:17:40.706517   50980 command_runner.go:130] > 3ec20f2e
	I0927 18:17:40.706601   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 18:17:40.716110   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:17:40.727290   50980 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.731755   50980 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.731792   50980 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.731847   50980 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:17:40.737405   50980 command_runner.go:130] > b5213941
	I0927 18:17:40.737482   50980 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
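
The three test -s / ln -fs sequences above install each CA under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0) so that TLS clients can find it in /etc/ssl/certs. A rough Go equivalent of one round trip, shelling out to the same openssl invocation, is sketched below; the paths are taken from the log, but the helper itself is illustrative and is not minikube code.

// Illustrative sketch of the hash-and-link step in the log above: OpenSSL's
// subject-hash names (for example b5213941.0) let tools locate a CA in /etc/ssl/certs.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors: openssl x509 -hash -noout -in <cert> && ln -fs <cert> <hash>.0
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked")
}
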
	I0927 18:17:40.747636   50980 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:17:40.752077   50980 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:17:40.752104   50980 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0927 18:17:40.752111   50980 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I0927 18:17:40.752117   50980 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0927 18:17:40.752122   50980 command_runner.go:130] > Access: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752127   50980 command_runner.go:130] > Modify: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752132   50980 command_runner.go:130] > Change: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752136   50980 command_runner.go:130] >  Birth: 2024-09-27 18:11:04.533349029 +0000
	I0927 18:17:40.752194   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 18:17:40.757695   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.757765   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 18:17:40.763047   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.763114   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 18:17:40.768963   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.769117   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 18:17:40.774592   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.774671   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 18:17:40.779866   50980 command_runner.go:130] > Certificate will not expire
	I0927 18:17:40.780090   50980 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0927 18:17:40.785588   50980 command_runner.go:130] > Certificate will not expire
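
The -checkend 86400 calls above ask OpenSSL whether each certificate expires within the next 24 hours. For reference, the same check can be done in-process with crypto/x509, as in this sketch; the certificate path is reused from the log, but the helper is illustrative only.

// Sketch only: an in-process equivalent of the `openssl x509 -checkend 86400`
// calls above, using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
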
	I0927 18:17:40.785656   50980 kubeadm.go:392] StartCluster: {Name:multinode-922780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-922780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.130 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:17:40.785787   50980 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:17:40.785836   50980 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:17:40.821604   50980 command_runner.go:130] > 4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67
	I0927 18:17:40.821633   50980 command_runner.go:130] > cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671
	I0927 18:17:40.821640   50980 command_runner.go:130] > a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b
	I0927 18:17:40.821649   50980 command_runner.go:130] > d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64
	I0927 18:17:40.821657   50980 command_runner.go:130] > 35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7
	I0927 18:17:40.821664   50980 command_runner.go:130] > 989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3
	I0927 18:17:40.821671   50980 command_runner.go:130] > 22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e
	I0927 18:17:40.821697   50980 command_runner.go:130] > 846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a
	I0927 18:17:40.821725   50980 cri.go:89] found id: "4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67"
	I0927 18:17:40.821735   50980 cri.go:89] found id: "cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671"
	I0927 18:17:40.821741   50980 cri.go:89] found id: "a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b"
	I0927 18:17:40.821747   50980 cri.go:89] found id: "d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64"
	I0927 18:17:40.821754   50980 cri.go:89] found id: "35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7"
	I0927 18:17:40.821760   50980 cri.go:89] found id: "989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3"
	I0927 18:17:40.821767   50980 cri.go:89] found id: "22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e"
	I0927 18:17:40.821772   50980 cri.go:89] found id: "846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a"
	I0927 18:17:40.821779   50980 cri.go:89] found id: ""
	I0927 18:17:40.821830   50980 ssh_runner.go:195] Run: sudo runc list -f json
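
StartCluster first enumerates the existing kube-system containers with crictl, which is where the eight "found id" lines above come from. A stripped-down sketch of that listing step follows; it mirrors the crictl invocation from the log but is not the actual cri.go implementation.

// Minimal sketch, not the actual cri.go code: list kube-system container IDs
// the same way the log above does, by shelling out to crictl.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatal(err)
	}
	// crictl --quiet prints one container ID per line.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
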
	
	
	==> CRI-O <==
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.691404719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461313691382364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3211e8f3-86e3-4403-9ff3-89565fa27490 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.691863753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89e56a40-e1a1-4c3c-8365-531f703b0120 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.691913828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89e56a40-e1a1-4c3c-8365-531f703b0120 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.692346726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89e56a40-e1a1-4c3c-8365-531f703b0120 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.740079730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55a41395-8169-49a0-b157-0de0d4b14588 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.740193962Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55a41395-8169-49a0-b157-0de0d4b14588 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.741516672Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06e68140-e679-43e9-a3c5-e2f176e4da2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.741930948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461313741908447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06e68140-e679-43e9-a3c5-e2f176e4da2e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.742758164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bac55176-4752-4d79-9615-4cc07999dfae name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.742820939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bac55176-4752-4d79-9615-4cc07999dfae name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.743289571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bac55176-4752-4d79-9615-4cc07999dfae name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.782844881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=510685b8-8046-475d-bd2b-a11d207e6890 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.782918394Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=510685b8-8046-475d-bd2b-a11d207e6890 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.783941328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b75e16e-81e4-4748-a15b-6c53f43ed15f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.784500825Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461313784473791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b75e16e-81e4-4748-a15b-6c53f43ed15f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.784934350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=693ef714-45be-464c-8841-d485851bda87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.784996333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=693ef714-45be-464c-8841-d485851bda87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.786699118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=693ef714-45be-464c-8841-d485851bda87 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.824831960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42edf6da-b653-4adf-93a8-7a385ca37e65 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.824926165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42edf6da-b653-4adf-93a8-7a385ca37e65 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.833748456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4097f069-5b6f-4e22-8538-729786ad3008 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.834358658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461313834331264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4097f069-5b6f-4e22-8538-729786ad3008 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.834888285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3abd1eef-78b9-4510-9f93-5c97e2f380bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.834967934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3abd1eef-78b9-4510-9f93-5c97e2f380bb name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:21:53 multinode-922780 crio[2687]: time="2024-09-27 18:21:53.835364925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:165e7a35fbfffc49b73aa1bdd67023a9c8ec63b2dc6f39add8a250b13807a84c,PodSandboxId:fdc267c7df3650982235182f74f7775c4edade28dfd607dbe86b86ee3e5a1bf4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1727461102167812701,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618,PodSandboxId:410d0c9cfe4e3d8cd2c8710237785bb8318efe8884f25f441b815b66187c18f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727461068811732962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f,PodSandboxId:4b6fd3691433a472e8a881b8fcdc19c13efc3621a2b01bca062908e11a87312f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1727461068511800581,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 892f5465-49f4-4449-b924-80278
5752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7,PodSandboxId:aecfe5cec08800f3cfd61e57dd0c7b1b6096298e62260389980abd590bb25e66,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727461068551248038,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b-cb5c172b8586,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a882adf5cf23721dd7ddc5bd6d009af13ce651f3136785ed9d55bff7263a579,PodSandboxId:b647f3457b7d53f6cc5cabdea15dd4917aade07792b378d376c9f2349346247f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461068446642476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe,PodSandboxId:9add222ee9e65ed084cc15b21ae6b5bc6bd97fe1b8e1ac267be3d1b1230bf5be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727461063078939267,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25,PodSandboxId:df7d0191d0e467b68272795b1caad166d65e5a65dfd884bbc99bd8a650eeff99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727461063070263423,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df
2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f,PodSandboxId:aa64fa5966ff0b38b5af0dc5ddf8ead361cbde041733365cd50c9ea3397c77f5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727461063051126767,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984,PodSandboxId:4d03d86bdc3d61057695cb11bda6ec5aa1fbd4213c1282751e12db2c41d0ce4d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727461063031627656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ed58a33b82c75dbe82e1f6e74c90b09def10e4c411f1e57753afceb222f373,PodSandboxId:cf9b758a85a4ffbee4ba7111ece605ed8d38d29ed7c6434b8827dd33de220eb7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1727460744806791575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-b4wjc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60ecf5ff-8716-46fa-be17-3a79465fa1bb,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67,PodSandboxId:61a617a9bfbdb8ad6547c052e4bcad53128bdc6e895dccae01423ab288376c5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727460690586918750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-44fmt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f5a1e22-2666-4526-a0d4-872a13ed8dd0,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cca4a7c828a0ebd2eeae3a927e6db485929f6d0db41dddd58d88eaa3c1172671,PodSandboxId:f5ea05780e1d0a67e9b4e7268c1741f7a21a3ee1da017e98b3a18905d3af2645,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727460690542691810,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 56b91b01-4f2e-4e97-a817-d3c1399688a9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b,PodSandboxId:c6cb30da32a92dbd27440bf5c746baeef512accea71bf5352b2ed85fb64d7c52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1727460678947754969,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-998kf,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 892f5465-49f4-4449-b924-802785752ddd,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64,PodSandboxId:fbe5d4d74bd77a7ca86182d0c687cbc3e0c7f0373bb8a31d02e2c4d7db77d9d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727460678773052691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5mznw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95f38a43-a74c-4f6b-ac5b
-cb5c172b8586,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7,PodSandboxId:81ace1646a07b271649a6ce95b2510036e666d6221028b19d2b8d2a8f5ab34d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727460668176078036,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5393ebddc0c3fb1d9bf4c6c50054bcde,},Annotations:map[string]string
{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3,PodSandboxId:ce23b4635c818cc715c5b73a52bdb985acfdb7bcbb8eeab9fe0611ff0257e88d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727460668170693891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d8cad4fffabacc42295bb83553f9862,},Annotations:map[string]string{io.kubernetes.
container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e,PodSandboxId:1701dd96be1c7edbeaf391c5e01cfc0a732a4deb53fdfb9ee227bebdf721c24a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727460668163214496,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb20b0304f404681b37f01465c9749a9,},Annotations:map[string]string{io.kubernetes.container.hash:
7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a,PodSandboxId:268366d188b92599a57a01cc9cde110040f4dafd8fa7dbea3d2ae20fb5849d0a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727460668089267288,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-922780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b48abf1d5de0be5e6aed878d010028,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3abd1eef-78b9-4510-9f93-5c97e2f380bb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	165e7a35fbfff       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   fdc267c7df365       busybox-7dff88458-b4wjc
	3184ec5c3cd3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   410d0c9cfe4e3       coredns-7c65d6cfc9-44fmt
	e242d9cd69ad3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   aecfe5cec0880       kube-proxy-5mznw
	23d491ec919d6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   4b6fd3691433a       kindnet-998kf
	7a882adf5cf23       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   b647f3457b7d5       storage-provisioner
	afff55be12fce       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   9add222ee9e65       kube-scheduler-multinode-922780
	ada30153748b8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   df7d0191d0e46       kube-apiserver-multinode-922780
	f7ef79385aeb2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   aa64fa5966ff0       kube-controller-manager-multinode-922780
	c77bfcd7006ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   4d03d86bdc3d6       etcd-multinode-922780
	51ed58a33b82c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   cf9b758a85a4f       busybox-7dff88458-b4wjc
	4b4e8ab3f4b6e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   61a617a9bfbdb       coredns-7c65d6cfc9-44fmt
	cca4a7c828a0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   f5ea05780e1d0       storage-provisioner
	a965ab4d2b3e0       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   c6cb30da32a92       kindnet-998kf
	d085955bc4917       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   fbe5d4d74bd77       kube-proxy-5mznw
	35e86781cf3ca       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   81ace1646a07b       etcd-multinode-922780
	989cab852d99e       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   ce23b4635c818       kube-scheduler-multinode-922780
	22e0a85d544be       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   1701dd96be1c7       kube-apiserver-multinode-922780
	846a04b06f43d       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   268366d188b92       kube-controller-manager-multinode-922780
	
	
	==> coredns [3184ec5c3cd3d2ed5299f60bce49f3d87daabe1728182c1ba8afdaa60b961618] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46871 - 3832 "HINFO IN 3286544022602867680.7406210148567013429. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0101406s
	
	
	==> coredns [4b4e8ab3f4b6e4c617c8ea01fe4588c0bab788daff252155b84cb6a03ff8ad67] <==
	[INFO] 10.244.1.2:51647 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002012227s
	[INFO] 10.244.1.2:39368 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000114744s
	[INFO] 10.244.1.2:50155 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000266853s
	[INFO] 10.244.1.2:33796 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001847074s
	[INFO] 10.244.1.2:45834 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082588s
	[INFO] 10.244.1.2:43515 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001466s
	[INFO] 10.244.1.2:37248 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072721s
	[INFO] 10.244.0.3:40017 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119609s
	[INFO] 10.244.0.3:50048 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067618s
	[INFO] 10.244.0.3:56012 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051131s
	[INFO] 10.244.0.3:40755 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097414s
	[INFO] 10.244.1.2:51235 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018039s
	[INFO] 10.244.1.2:48451 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130222s
	[INFO] 10.244.1.2:33568 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076099s
	[INFO] 10.244.1.2:48476 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088972s
	[INFO] 10.244.0.3:41501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000085151s
	[INFO] 10.244.0.3:45234 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163836s
	[INFO] 10.244.0.3:39921 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101893s
	[INFO] 10.244.0.3:44887 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000068439s
	[INFO] 10.244.1.2:41027 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000260438s
	[INFO] 10.244.1.2:59660 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000077846s
	[INFO] 10.244.1.2:56509 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000076323s
	[INFO] 10.244.1.2:57417 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000066676s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-922780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=multinode-922780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T18_11_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:11:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922780
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:21:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:17:46 +0000   Fri, 27 Sep 2024 18:11:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    multinode-922780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5f6d57dad0044f45b917aa623008f382
	  System UUID:                5f6d57da-d004-4f45-b917-aa623008f382
	  Boot ID:                    446d1f84-bf62-41a7-94ce-14673a478468
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-b4wjc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 coredns-7c65d6cfc9-44fmt                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-922780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-998kf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-922780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-922780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-5mznw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-922780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-922780 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-922780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-922780 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-922780 event: Registered Node multinode-922780 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-922780 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node multinode-922780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node multinode-922780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node multinode-922780 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                   node-controller  Node multinode-922780 event: Registered Node multinode-922780 in Controller
	
	
	Name:               multinode-922780-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-922780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=multinode-922780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_27T18_18_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:18:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-922780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:19:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:20:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:20:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:20:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 27 Sep 2024 18:18:58 +0000   Fri, 27 Sep 2024 18:20:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    multinode-922780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f7a4de44d4d4b1dbba10366435f44d4
	  System UUID:                7f7a4de4-4d4d-4b1d-bba1-0366435f44d4
	  Boot ID:                    10978c1b-be8e-468f-9ed6-668d13bef83b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-222pq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kindnet-45qxg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-proxy-bqkzm           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  Starting                 9m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m56s (x2 over 9m56s)  kubelet          Node multinode-922780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x2 over 9m56s)  kubelet          Node multinode-922780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x2 over 9m56s)  kubelet          Node multinode-922780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m36s                  kubelet          Node multinode-922780-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-922780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-922780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-922780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-922780-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-922780-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055261] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058393] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.195905] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.126465] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.265573] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[Sep27 18:11] systemd-fstab-generator[744]: Ignoring "noauto" option for root device
	[  +3.746348] systemd-fstab-generator[875]: Ignoring "noauto" option for root device
	[  +0.063440] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994833] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.074919] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.617860] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.501929] kauditd_printk_skb: 46 callbacks suppressed
	[ +12.243867] kauditd_printk_skb: 41 callbacks suppressed
	[Sep27 18:12] kauditd_printk_skb: 14 callbacks suppressed
	[Sep27 18:17] systemd-fstab-generator[2612]: Ignoring "noauto" option for root device
	[  +0.144123] systemd-fstab-generator[2624]: Ignoring "noauto" option for root device
	[  +0.169577] systemd-fstab-generator[2638]: Ignoring "noauto" option for root device
	[  +0.143091] systemd-fstab-generator[2651]: Ignoring "noauto" option for root device
	[  +0.282156] systemd-fstab-generator[2679]: Ignoring "noauto" option for root device
	[  +0.692620] systemd-fstab-generator[2770]: Ignoring "noauto" option for root device
	[  +1.891157] systemd-fstab-generator[2890]: Ignoring "noauto" option for root device
	[  +6.169390] kauditd_printk_skb: 184 callbacks suppressed
	[  +9.040677] kauditd_printk_skb: 34 callbacks suppressed
	[Sep27 18:18] systemd-fstab-generator[3733]: Ignoring "noauto" option for root device
	[ +19.430119] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [35e86781cf3ca260b85cdff8576d071b252f585ada59fd2b6c1fe0b73b43e0d7] <==
	{"level":"warn","ts":"2024-09-27T18:12:04.467419Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.898885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922780-m02\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-27T18:12:04.467591Z","caller":"traceutil/trace.go:171","msg":"trace[1105620344] range","detail":"{range_begin:/registry/minions/multinode-922780-m02; range_end:; response_count:1; response_revision:476; }","duration":"129.082638ms","start":"2024-09-27T18:12:04.338492Z","end":"2024-09-27T18:12:04.467575Z","steps":["trace[1105620344] 'range keys from in-memory index tree'  (duration: 128.811504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:12:53.805128Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.892673ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11349231815928092018 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-922780-m03.17f92c6bcb6950b8\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-922780-m03.17f92c6bcb6950b8\" value_size:646 lease:2125859779073315907 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-27T18:12:53.805458Z","caller":"traceutil/trace.go:171","msg":"trace[2013861784] linearizableReadLoop","detail":"{readStateIndex:606; appliedIndex:605; }","duration":"223.911209ms","start":"2024-09-27T18:12:53.581523Z","end":"2024-09-27T18:12:53.805434Z","steps":["trace[2013861784] 'read index received'  (duration: 85.007791ms)","trace[2013861784] 'applied index is now lower than readState.Index'  (duration: 138.902236ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T18:12:53.805522Z","caller":"traceutil/trace.go:171","msg":"trace[105043843] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"230.955725ms","start":"2024-09-27T18:12:53.574544Z","end":"2024-09-27T18:12:53.805500Z","steps":["trace[105043843] 'process raft request'  (duration: 91.972833ms)","trace[105043843] 'compare'  (duration: 137.765138ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-27T18:12:53.805665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.135473ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-922780-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T18:12:53.805708Z","caller":"traceutil/trace.go:171","msg":"trace[616286611] range","detail":"{range_begin:/registry/csinodes/multinode-922780-m03; range_end:; response_count:0; response_revision:573; }","duration":"224.183938ms","start":"2024-09-27T18:12:53.581517Z","end":"2024-09-27T18:12:53.805701Z","steps":["trace[616286611] 'agreement among raft nodes before linearized reading'  (duration: 224.060057ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:12:53.805794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.212518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-922780-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T18:12:53.805833Z","caller":"traceutil/trace.go:171","msg":"trace[2136683949] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-922780-m03; range_end:; response_count:0; response_revision:573; }","duration":"224.25523ms","start":"2024-09-27T18:12:53.581571Z","end":"2024-09-27T18:12:53.805827Z","steps":["trace[2136683949] 'agreement among raft nodes before linearized reading'  (duration: 224.196297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:12:53.805941Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.518237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922780-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T18:12:53.805987Z","caller":"traceutil/trace.go:171","msg":"trace[1082279073] range","detail":"{range_begin:/registry/minions/multinode-922780-m03; range_end:; response_count:0; response_revision:573; }","duration":"112.565858ms","start":"2024-09-27T18:12:53.693414Z","end":"2024-09-27T18:12:53.805980Z","steps":["trace[1082279073] 'agreement among raft nodes before linearized reading'  (duration: 112.504532ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T18:13:01.394597Z","caller":"traceutil/trace.go:171","msg":"trace[439980453] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"198.520349ms","start":"2024-09-27T18:13:01.196059Z","end":"2024-09-27T18:13:01.394579Z","steps":["trace[439980453] 'read index received'  (duration: 198.370344ms)","trace[439980453] 'applied index is now lower than readState.Index'  (duration: 149.474µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-27T18:13:01.395021Z","caller":"traceutil/trace.go:171","msg":"trace[1482407675] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"227.677722ms","start":"2024-09-27T18:13:01.167329Z","end":"2024-09-27T18:13:01.395006Z","steps":["trace[1482407675] 'process raft request'  (duration: 227.149255ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T18:13:01.395197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.087998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-922780-m03\" ","response":"range_response_count:1 size:2894"}
	{"level":"info","ts":"2024-09-27T18:13:01.395581Z","caller":"traceutil/trace.go:171","msg":"trace[475392158] range","detail":"{range_begin:/registry/minions/multinode-922780-m03; range_end:; response_count:1; response_revision:614; }","duration":"199.532631ms","start":"2024-09-27T18:13:01.196037Z","end":"2024-09-27T18:13:01.395569Z","steps":["trace[475392158] 'agreement among raft nodes before linearized reading'  (duration: 199.017369ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T18:16:07.511342Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-27T18:16:07.511500Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-922780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	{"level":"warn","ts":"2024-09-27T18:16:07.511657Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T18:16:07.511780Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T18:16:07.588757Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-27T18:16:07.588823Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.6:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-27T18:16:07.588891Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6f26d2d338759d80","current-leader-member-id":"6f26d2d338759d80"}
	{"level":"info","ts":"2024-09-27T18:16:07.591874Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:16:07.592112Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:16:07.592192Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-922780","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"]}
	
	
	==> etcd [c77bfcd7006ad56b02529735b0e5d30b18b2b0dbd652fe4745e7aa2dfb546984] <==
	{"level":"info","ts":"2024-09-27T18:17:43.424354Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:17:43.424436Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","added-peer-id":"6f26d2d338759d80","added-peer-peer-urls":["https://192.168.39.6:2380"]}
	{"level":"info","ts":"2024-09-27T18:17:43.424603Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:17:43.424923Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:17:43.470437Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T18:17:43.472466Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6f26d2d338759d80","initial-advertise-peer-urls":["https://192.168.39.6:2380"],"listen-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:17:43.472512Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:17:43.479167Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:17:43.479276Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-09-27T18:17:45.077747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T18:17:45.077832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T18:17:45.077876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2024-09-27T18:17:45.077891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.077897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.077908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.077916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-09-27T18:17:45.085215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:17:45.085526Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:17:45.085229Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:multinode-922780 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:17:45.086011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T18:17:45.086049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T18:17:45.086625Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:17:45.086633Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:17:45.087437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2024-09-27T18:17:45.087988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:21:54 up 11 min,  0 users,  load average: 0.19, 0.32, 0.19
	Linux multinode-922780 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [23d491ec919d66d42479d81bf2bd85c73077eabee291756a20aab2e2bf68c45f] <==
	I0927 18:20:49.529873       1 main.go:299] handling current node
	I0927 18:20:59.534681       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:20:59.534793       1 main.go:299] handling current node
	I0927 18:20:59.534822       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:20:59.534829       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:21:09.537818       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:21:09.537878       1 main.go:299] handling current node
	I0927 18:21:09.537899       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:21:09.537907       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:21:19.537777       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:21:19.537940       1 main.go:299] handling current node
	I0927 18:21:19.538012       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:21:19.538031       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:21:29.536040       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:21:29.536244       1 main.go:299] handling current node
	I0927 18:21:29.536288       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:21:29.536307       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:21:39.535947       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:21:39.535992       1 main.go:299] handling current node
	I0927 18:21:39.536027       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:21:39.536034       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:21:49.529199       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:21:49.529364       1 main.go:299] handling current node
	I0927 18:21:49.529408       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:21:49.529435       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [a965ab4d2b3e0a767abee519953e3dc32dc94de51a63b53782241e4067b0b78b] <==
	I0927 18:15:19.920905       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:29.920029       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:29.920116       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:29.920341       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:29.920363       1 main.go:299] handling current node
	I0927 18:15:29.920380       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:29.920385       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:39.926304       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:39.926369       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:39.926506       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:39.926526       1 main.go:299] handling current node
	I0927 18:15:39.926540       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:39.926545       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:49.920236       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:49.920289       1 main.go:299] handling current node
	I0927 18:15:49.920310       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:49.920318       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:49.920495       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:49.920521       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	I0927 18:15:59.926703       1 main.go:295] Handling node with IPs: map[192.168.39.6:{}]
	I0927 18:15:59.926851       1 main.go:299] handling current node
	I0927 18:15:59.926883       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0927 18:15:59.926906       1 main.go:322] Node multinode-922780-m02 has CIDR [10.244.1.0/24] 
	I0927 18:15:59.927065       1 main.go:295] Handling node with IPs: map[192.168.39.130:{}]
	I0927 18:15:59.927091       1 main.go:322] Node multinode-922780-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [22e0a85d544be9389b777b6576f49ca65c373ec45e24fc0e1cdc330c4518f09e] <==
	I0927 18:11:12.055316       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:11:12.105557       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:11:12.188739       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0927 18:11:12.196038       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.6]
	I0927 18:11:12.197023       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 18:11:12.204336       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 18:11:12.454840       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:11:13.372926       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:11:13.404658       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0927 18:11:13.416501       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:11:17.906950       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0927 18:11:18.157376       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0927 18:12:26.055321       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51268: use of closed network connection
	E0927 18:12:26.224033       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51296: use of closed network connection
	E0927 18:12:26.414610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51302: use of closed network connection
	E0927 18:12:26.588865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:51316: use of closed network connection
	E0927 18:12:26.749487       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50030: use of closed network connection
	E0927 18:12:26.911972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50040: use of closed network connection
	E0927 18:12:27.185623       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50054: use of closed network connection
	E0927 18:12:27.344610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50068: use of closed network connection
	E0927 18:12:27.510967       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50082: use of closed network connection
	E0927 18:12:27.671301       1 conn.go:339] Error on socket receive: read tcp 192.168.39.6:8443->192.168.39.1:50098: use of closed network connection
	I0927 18:16:07.510255       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0927 18:16:07.533658       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0927 18:16:07.540285       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ada30153748b8cca6ca07dea23cec72a98fc8447b4f22aaf35d153d0aded1b25] <==
	I0927 18:17:46.357925       1 aggregator.go:171] initial CRD sync complete...
	I0927 18:17:46.358085       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 18:17:46.358166       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 18:17:46.400267       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 18:17:46.408011       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 18:17:46.408226       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 18:17:46.408257       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 18:17:46.408366       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 18:17:46.408569       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 18:17:46.408674       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 18:17:46.408881       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	E0927 18:17:46.420010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 18:17:46.433521       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 18:17:46.444892       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 18:17:46.445008       1 policy_source.go:224] refreshing policies
	I0927 18:17:46.448417       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 18:17:46.459775       1 cache.go:39] Caches are synced for autoregister controller
	I0927 18:17:47.303702       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 18:17:48.607055       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:17:48.970009       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:17:49.001729       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:17:49.145870       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:17:49.152639       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:17:49.875521       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0927 18:17:50.125740       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [846a04b06f43de08076e27afa5ffb474db4bac4cff16d0f9fb7862d9e7831d5a] <==
	I0927 18:13:41.739178       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:13:41.739327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:42.898781       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-922780-m03\" does not exist"
	I0927 18:13:42.899677       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:13:42.928253       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922780-m03" podCIDRs=["10.244.4.0/24"]
	I0927 18:13:42.928340       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:42.928399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:42.928447       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:43.262243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:43.590030       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:47.558589       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:13:53.039873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:02.043413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:02.043973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:14:02.054663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:02.475205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:42.492022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:14:42.492535       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m03"
	I0927 18:14:42.517465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:14:42.554898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.578088ms"
	I0927 18:14:42.555293       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.073µs"
	I0927 18:14:47.554296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:47.572811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:14:47.644610       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:14:57.724215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	
	
	==> kube-controller-manager [f7ef79385aeb2e5a546484acdbcf46951c37ee93d8a4b2bd56f1420686a9963f] <==
	I0927 18:19:05.661647       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-922780-m03" podCIDRs=["10.244.2.0/24"]
	I0927 18:19:05.664355       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.664463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.667427       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:05.682303       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:06.001564       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:09.978779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:16.019654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:24.006658       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:24.007559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:19:24.020124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:24.888010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:28.605384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:28.624969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:29.187904       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m03"
	I0927 18:19:29.188539       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-922780-m02"
	I0927 18:20:09.764989       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-p98m2"
	I0927 18:20:09.799935       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-p98m2"
	I0927 18:20:09.799993       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8jsf9"
	I0927 18:20:09.832716       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8jsf9"
	I0927 18:20:09.907777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:20:09.935539       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	I0927 18:20:09.970361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.230039ms"
	I0927 18:20:09.970913       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.093µs"
	I0927 18:20:14.995251       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-922780-m02"
	
	
	==> kube-proxy [d085955bc4917c90649a5b49d7917d3832819316ae03eb33a23180fb79ec0a64] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:11:18.953574       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:11:18.965058       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0927 18:11:18.965430       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:11:19.000562       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:11:19.000592       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:11:19.000615       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:11:19.002792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:11:19.003067       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:11:19.003115       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:11:19.005028       1 config.go:199] "Starting service config controller"
	I0927 18:11:19.005051       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:11:19.005070       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:11:19.005074       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:11:19.005533       1 config.go:328] "Starting node config controller"
	I0927 18:11:19.005561       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:11:19.105241       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 18:11:19.105324       1 shared_informer.go:320] Caches are synced for service config
	I0927 18:11:19.105595       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e242d9cd69ad375829ad40e90a01c106d8a9c6645abd5f43073be998fa2ce9b7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:17:49.007799       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:17:49.023818       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.6"]
	E0927 18:17:49.023977       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:17:49.075004       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:17:49.075219       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:17:49.075357       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:17:49.079400       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:17:49.080269       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:17:49.080388       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:17:49.087410       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:17:49.087477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:17:49.088037       1 config.go:328] "Starting node config controller"
	I0927 18:17:49.088058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:17:49.089061       1 config.go:199] "Starting service config controller"
	I0927 18:17:49.089089       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:17:49.187575       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 18:17:49.188782       1 shared_informer.go:320] Caches are synced for node config
	I0927 18:17:49.189972       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [989cab852d99e34f27249c2d6214b246ac2094aa33ff0db11d30596d374871d3] <==
	E0927 18:11:10.494175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 18:11:10.494271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494317       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:10.494343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 18:11:10.494431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:10.494481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:10.494504       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.317736       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:11.317903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.374986       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 18:11:11.375052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.574742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 18:11:11.574796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.594324       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 18:11:11.594430       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.620913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0927 18:11:11.620965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 18:11:11.723662       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 18:11:11.724004       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0927 18:11:11.791378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 18:11:11.791426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0927 18:11:14.282700       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0927 18:16:07.521840       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [afff55be12fceac4f96142abe6e41926a4eb90a26cfff1bd2c80f6dae48949fe] <==
	I0927 18:17:44.089266       1 serving.go:386] Generated self-signed cert in-memory
	W0927 18:17:46.370851       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:17:46.370990       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:17:46.371021       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:17:46.371091       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:17:46.404363       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 18:17:46.406221       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:17:46.411052       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 18:17:46.412299       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 18:17:46.412417       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:17:46.412462       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 18:17:46.513233       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 18:20:42 multinode-922780 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 18:20:42 multinode-922780 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 18:20:42 multinode-922780 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 18:20:42 multinode-922780 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 18:20:42 multinode-922780 kubelet[2897]: E0927 18:20:42.466467    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461242464403387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:20:42 multinode-922780 kubelet[2897]: E0927 18:20:42.466554    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461242464403387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:20:52 multinode-922780 kubelet[2897]: E0927 18:20:52.472915    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461252469667055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:20:52 multinode-922780 kubelet[2897]: E0927 18:20:52.473484    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461252469667055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:02 multinode-922780 kubelet[2897]: E0927 18:21:02.480192    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461262479228656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:02 multinode-922780 kubelet[2897]: E0927 18:21:02.480804    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461262479228656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:12 multinode-922780 kubelet[2897]: E0927 18:21:12.484329    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461272483013368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:12 multinode-922780 kubelet[2897]: E0927 18:21:12.484364    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461272483013368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:22 multinode-922780 kubelet[2897]: E0927 18:21:22.486913    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461282485672367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:22 multinode-922780 kubelet[2897]: E0927 18:21:22.486955    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461282485672367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:32 multinode-922780 kubelet[2897]: E0927 18:21:32.488235    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461292487676834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:32 multinode-922780 kubelet[2897]: E0927 18:21:32.488308    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461292487676834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:42 multinode-922780 kubelet[2897]: E0927 18:21:42.409923    2897 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 27 18:21:42 multinode-922780 kubelet[2897]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 27 18:21:42 multinode-922780 kubelet[2897]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 27 18:21:42 multinode-922780 kubelet[2897]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 27 18:21:42 multinode-922780 kubelet[2897]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 27 18:21:42 multinode-922780 kubelet[2897]: E0927 18:21:42.490673    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461302490396329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:42 multinode-922780 kubelet[2897]: E0927 18:21:42.490716    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461302490396329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:52 multinode-922780 kubelet[2897]: E0927 18:21:52.492994    2897 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461312492576252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:21:52 multinode-922780 kubelet[2897]: E0927 18:21:52.493108    2897 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461312492576252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 18:21:53.430799   53365 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19712-11184/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
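Editor's note: the "bufio.Scanner: token too long" error in the stderr above comes from Go's standard library, not from the cluster itself: a Scanner rejects any single line longer than its buffer (64 KiB by default), and lastStart.txt evidently contains such a line. The snippet below is a minimal, hypothetical sketch of that stdlib behavior only (it is not minikube's actual logs.go code); it reproduces the error and shows how an enlarged buffer avoids it:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// One 100 KiB line with no newline, standing in for an oversized log line.
		longLine := strings.Repeat("x", 100*1024)

		// Default Scanner buffer (64 KiB): Scan fails and Err reports "token too long".
		s := bufio.NewScanner(strings.NewReader(longLine))
		for s.Scan() {
		}
		fmt.Println("default buffer:", s.Err())

		// Enlarged buffer (1 MiB max token): the same line scans without error.
		s = bufio.NewScanner(strings.NewReader(longLine))
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for s.Scan() {
		}
		fmt.Println("1 MiB buffer:", s.Err())
	}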
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-922780 -n multinode-922780
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-922780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.62s)

                                                
                                    
TestPreload (190.46s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-384202 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-384202 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m27.595113845s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-384202 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-384202 image pull gcr.io/k8s-minikube/busybox: (3.456339138s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-384202
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-384202: (7.282374972s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-384202 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-384202 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.041852941s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-384202 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-09-27 18:28:56.460704607 +0000 UTC m=+5584.675905164
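Editor's note: the assertion that fails here (preload_test.go:76) simply checks that the busybox image pulled before the stop/restart is still present in the cluster's image list afterwards. A rough, hypothetical equivalent of that check, assuming the same minikube binary path and profile name used in this run, looks like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask minikube (CRI-O runtime in this run) which images the cluster currently has.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-384202", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// The test expects the image pulled before the restart to survive it.
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox still present after restart")
		} else {
			fmt.Println("busybox missing after restart (the failure reported above)")
		}
	}

In the image list above, only the preloaded v1.24.4 control-plane images, kindnet, and the storage provisioner survive the restart, which is what trips the assertion.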
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-384202 -n test-preload-384202
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-384202 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-384202 logs -n 25: (1.094749787s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780 sudo cat                                       | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m03_multinode-922780.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt                       | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m02:/home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n                                                                 | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | multinode-922780-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-922780 ssh -n multinode-922780-m02 sudo cat                                   | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	|         | /home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-922780 node stop m03                                                          | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:13 UTC |
	| node    | multinode-922780 node start                                                             | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:13 UTC | 27 Sep 24 18:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:14 UTC |                     |
	| stop    | -p multinode-922780                                                                     | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:14 UTC |                     |
	| start   | -p multinode-922780                                                                     | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:16 UTC | 27 Sep 24 18:19 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC |                     |
	| node    | multinode-922780 node delete                                                            | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC | 27 Sep 24 18:19 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-922780 stop                                                                   | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:19 UTC |                     |
	| start   | -p multinode-922780                                                                     | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:21 UTC | 27 Sep 24 18:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-922780                                                                | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC |                     |
	| start   | -p multinode-922780-m02                                                                 | multinode-922780-m02 | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-922780-m03                                                                 | multinode-922780-m03 | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC | 27 Sep 24 18:25 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-922780                                                                 | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC |                     |
	| delete  | -p multinode-922780-m03                                                                 | multinode-922780-m03 | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC | 27 Sep 24 18:25 UTC |
	| delete  | -p multinode-922780                                                                     | multinode-922780     | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC | 27 Sep 24 18:25 UTC |
	| start   | -p test-preload-384202                                                                  | test-preload-384202  | jenkins | v1.34.0 | 27 Sep 24 18:25 UTC | 27 Sep 24 18:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-384202 image pull                                                          | test-preload-384202  | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-384202                                                                  | test-preload-384202  | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:27 UTC |
	| start   | -p test-preload-384202                                                                  | test-preload-384202  | jenkins | v1.34.0 | 27 Sep 24 18:27 UTC | 27 Sep 24 18:28 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-384202 image list                                                          | test-preload-384202  | jenkins | v1.34.0 | 27 Sep 24 18:28 UTC | 27 Sep 24 18:28 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:27:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:27:27.244914   55766 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:27:27.245051   55766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:27:27.245063   55766 out.go:358] Setting ErrFile to fd 2...
	I0927 18:27:27.245070   55766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:27:27.245492   55766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:27:27.246061   55766 out.go:352] Setting JSON to false
	I0927 18:27:27.246918   55766 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7792,"bootTime":1727453855,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:27:27.247009   55766 start.go:139] virtualization: kvm guest
	I0927 18:27:27.249283   55766 out.go:177] * [test-preload-384202] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:27:27.250672   55766 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:27:27.250703   55766 notify.go:220] Checking for updates...
	I0927 18:27:27.253220   55766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:27:27.254393   55766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:27:27.255671   55766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:27:27.256949   55766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:27:27.258171   55766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:27:27.260121   55766 config.go:182] Loaded profile config "test-preload-384202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0927 18:27:27.260579   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:27:27.260660   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:27:27.276484   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45053
	I0927 18:27:27.277010   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:27:27.277806   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:27:27.277841   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:27:27.278169   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:27:27.278450   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:27.280418   55766 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0927 18:27:27.281575   55766 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:27:27.281930   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:27:27.281979   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:27:27.296280   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45239
	I0927 18:27:27.296689   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:27:27.297112   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:27:27.297139   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:27:27.297432   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:27:27.297621   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:27.331964   55766 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:27:27.333206   55766 start.go:297] selected driver: kvm2
	I0927 18:27:27.333220   55766 start.go:901] validating driver "kvm2" against &{Name:test-preload-384202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-384202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:27:27.333332   55766 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:27:27.334016   55766 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:27:27.334090   55766 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:27:27.348915   55766 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:27:27.349308   55766 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:27:27.349342   55766 cni.go:84] Creating CNI manager for ""
	I0927 18:27:27.349384   55766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:27:27.349441   55766 start.go:340] cluster config:
	{Name:test-preload-384202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-384202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:27:27.349550   55766 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:27:27.351255   55766 out.go:177] * Starting "test-preload-384202" primary control-plane node in "test-preload-384202" cluster
	I0927 18:27:27.352427   55766 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0927 18:27:27.451457   55766 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0927 18:27:27.451482   55766 cache.go:56] Caching tarball of preloaded images
	I0927 18:27:27.451658   55766 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0927 18:27:27.453271   55766 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0927 18:27:27.454525   55766 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0927 18:27:27.556741   55766 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0927 18:27:38.895724   55766 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0927 18:27:38.895824   55766 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0927 18:27:39.739666   55766 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0927 18:27:39.739793   55766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/config.json ...
	I0927 18:27:39.740024   55766 start.go:360] acquireMachinesLock for test-preload-384202: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:27:39.740086   55766 start.go:364] duration metric: took 41.892µs to acquireMachinesLock for "test-preload-384202"
	I0927 18:27:39.740100   55766 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:27:39.740105   55766 fix.go:54] fixHost starting: 
	I0927 18:27:39.740453   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:27:39.740511   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:27:39.755500   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33689
	I0927 18:27:39.755967   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:27:39.756507   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:27:39.756533   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:27:39.756873   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:27:39.757041   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:39.757187   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetState
	I0927 18:27:39.758929   55766 fix.go:112] recreateIfNeeded on test-preload-384202: state=Stopped err=<nil>
	I0927 18:27:39.758949   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	W0927 18:27:39.759112   55766 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:27:39.761100   55766 out.go:177] * Restarting existing kvm2 VM for "test-preload-384202" ...
	I0927 18:27:39.762352   55766 main.go:141] libmachine: (test-preload-384202) Calling .Start
	I0927 18:27:39.762507   55766 main.go:141] libmachine: (test-preload-384202) Ensuring networks are active...
	I0927 18:27:39.763337   55766 main.go:141] libmachine: (test-preload-384202) Ensuring network default is active
	I0927 18:27:39.763722   55766 main.go:141] libmachine: (test-preload-384202) Ensuring network mk-test-preload-384202 is active
	I0927 18:27:39.764044   55766 main.go:141] libmachine: (test-preload-384202) Getting domain xml...
	I0927 18:27:39.764753   55766 main.go:141] libmachine: (test-preload-384202) Creating domain...
	I0927 18:27:40.991657   55766 main.go:141] libmachine: (test-preload-384202) Waiting to get IP...
	I0927 18:27:40.992585   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:40.992903   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:40.992985   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:40.992899   55850 retry.go:31] will retry after 250.364015ms: waiting for machine to come up
	I0927 18:27:41.245692   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:41.246105   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:41.246129   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:41.246070   55850 retry.go:31] will retry after 264.39756ms: waiting for machine to come up
	I0927 18:27:41.512660   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:41.513016   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:41.513040   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:41.512974   55850 retry.go:31] will retry after 319.039596ms: waiting for machine to come up
	I0927 18:27:41.833662   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:41.834110   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:41.834141   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:41.834061   55850 retry.go:31] will retry after 549.39509ms: waiting for machine to come up
	I0927 18:27:42.384835   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:42.385239   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:42.385265   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:42.385200   55850 retry.go:31] will retry after 738.864622ms: waiting for machine to come up
	I0927 18:27:43.126245   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:43.126661   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:43.126713   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:43.126602   55850 retry.go:31] will retry after 811.03553ms: waiting for machine to come up
	I0927 18:27:43.939742   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:43.940307   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:43.940331   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:43.940255   55850 retry.go:31] will retry after 1.107445805s: waiting for machine to come up
	I0927 18:27:45.049798   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:45.050206   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:45.050243   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:45.050175   55850 retry.go:31] will retry after 944.675333ms: waiting for machine to come up
	I0927 18:27:45.996302   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:45.996771   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:45.996797   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:45.996724   55850 retry.go:31] will retry after 1.220538666s: waiting for machine to come up
	I0927 18:27:47.219125   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:47.219568   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:47.219613   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:47.219516   55850 retry.go:31] will retry after 1.547525478s: waiting for machine to come up
	I0927 18:27:48.768705   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:48.769200   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:48.769228   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:48.769147   55850 retry.go:31] will retry after 2.014805843s: waiting for machine to come up
	I0927 18:27:50.786014   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:50.786537   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:50.786579   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:50.786498   55850 retry.go:31] will retry after 3.288941061s: waiting for machine to come up
	I0927 18:27:54.079177   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:54.079635   55766 main.go:141] libmachine: (test-preload-384202) DBG | unable to find current IP address of domain test-preload-384202 in network mk-test-preload-384202
	I0927 18:27:54.079666   55766 main.go:141] libmachine: (test-preload-384202) DBG | I0927 18:27:54.079567   55850 retry.go:31] will retry after 3.569084734s: waiting for machine to come up
	I0927 18:27:57.652163   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.652711   55766 main.go:141] libmachine: (test-preload-384202) Found IP for machine: 192.168.39.165
	I0927 18:27:57.652729   55766 main.go:141] libmachine: (test-preload-384202) Reserving static IP address...
	I0927 18:27:57.652743   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has current primary IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.653108   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "test-preload-384202", mac: "52:54:00:16:68:ff", ip: "192.168.39.165"} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:57.653141   55766 main.go:141] libmachine: (test-preload-384202) Reserved static IP address: 192.168.39.165
	I0927 18:27:57.653162   55766 main.go:141] libmachine: (test-preload-384202) DBG | skip adding static IP to network mk-test-preload-384202 - found existing host DHCP lease matching {name: "test-preload-384202", mac: "52:54:00:16:68:ff", ip: "192.168.39.165"}
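
The `retry.go:31` lines above show the wait-for-IP loop: poll the libvirt DHCP leases for the VM's MAC address and sleep for a growing, jittered interval between attempts until the machine comes up. A minimal sketch of that backoff pattern, assuming a hypothetical `lookupIP` helper rather than the kvm2 driver's actual API:

```go
// Sketch of the wait-for-IP backoff loop; lookupIP is a stand-in for reading
// the libvirt DHCP leases for a MAC address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Add jitter and grow the delay, matching the increasing
		// "will retry after ..." intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:16:68:ff", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("Found IP for machine:", ip)
	}
}
```
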
	I0927 18:27:57.653175   55766 main.go:141] libmachine: (test-preload-384202) Waiting for SSH to be available...
	I0927 18:27:57.653185   55766 main.go:141] libmachine: (test-preload-384202) DBG | Getting to WaitForSSH function...
	I0927 18:27:57.655610   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.655907   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:57.655933   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.656049   55766 main.go:141] libmachine: (test-preload-384202) DBG | Using SSH client type: external
	I0927 18:27:57.656078   55766 main.go:141] libmachine: (test-preload-384202) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa (-rw-------)
	I0927 18:27:57.656107   55766 main.go:141] libmachine: (test-preload-384202) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.165 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:27:57.656121   55766 main.go:141] libmachine: (test-preload-384202) DBG | About to run SSH command:
	I0927 18:27:57.656133   55766 main.go:141] libmachine: (test-preload-384202) DBG | exit 0
	I0927 18:27:57.778497   55766 main.go:141] libmachine: (test-preload-384202) DBG | SSH cmd err, output: <nil>: 
	I0927 18:27:57.778860   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetConfigRaw
	I0927 18:27:57.779540   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetIP
	I0927 18:27:57.782368   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.782756   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:57.782784   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.783021   55766 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/config.json ...
	I0927 18:27:57.783216   55766 machine.go:93] provisionDockerMachine start ...
	I0927 18:27:57.783232   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:57.783425   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:57.785383   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.785694   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:57.785733   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.785882   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:57.786029   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:57.786198   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:57.786339   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:57.786503   55766 main.go:141] libmachine: Using SSH client type: native
	I0927 18:27:57.786734   55766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0927 18:27:57.786747   55766 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:27:57.890749   55766 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 18:27:57.890779   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetMachineName
	I0927 18:27:57.891017   55766 buildroot.go:166] provisioning hostname "test-preload-384202"
	I0927 18:27:57.891043   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetMachineName
	I0927 18:27:57.891189   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:57.893868   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.894217   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:57.894248   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:57.894358   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:57.894552   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:57.894734   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:57.894884   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:57.895041   55766 main.go:141] libmachine: Using SSH client type: native
	I0927 18:27:57.895288   55766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0927 18:27:57.895307   55766 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-384202 && echo "test-preload-384202" | sudo tee /etc/hostname
	I0927 18:27:58.017938   55766 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-384202
	
	I0927 18:27:58.017970   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.020928   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.021458   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.021499   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.021711   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.021909   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.022113   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.022267   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.022468   55766 main.go:141] libmachine: Using SSH client type: native
	I0927 18:27:58.022698   55766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0927 18:27:58.022720   55766 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-384202' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-384202/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-384202' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:27:58.134845   55766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:27:58.134876   55766 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:27:58.134905   55766 buildroot.go:174] setting up certificates
	I0927 18:27:58.134916   55766 provision.go:84] configureAuth start
	I0927 18:27:58.134925   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetMachineName
	I0927 18:27:58.135174   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetIP
	I0927 18:27:58.137591   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.137882   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.137922   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.138022   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.140177   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.140533   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.140563   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.140663   55766 provision.go:143] copyHostCerts
	I0927 18:27:58.140729   55766 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:27:58.140741   55766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:27:58.140827   55766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:27:58.140939   55766 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:27:58.140958   55766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:27:58.140999   55766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:27:58.141093   55766 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:27:58.141102   55766 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:27:58.141125   55766 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:27:58.141177   55766 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.test-preload-384202 san=[127.0.0.1 192.168.39.165 localhost minikube test-preload-384202]
	I0927 18:27:58.282025   55766 provision.go:177] copyRemoteCerts
	I0927 18:27:58.282093   55766 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:27:58.282124   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.284865   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.285178   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.285204   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.285402   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.285589   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.285782   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.285926   55766 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa Username:docker}
	I0927 18:27:58.368411   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:27:58.391568   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0927 18:27:58.414424   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 18:27:58.437602   55766 provision.go:87] duration metric: took 302.672915ms to configureAuth
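
configureAuth above regenerates the machine's server certificate with SANs covering 127.0.0.1, the VM IP, and its hostnames, signed by the CA under `.minikube/certs`, and then copies the resulting PEM files to the guest. A minimal sketch of producing that kind of SAN-bearing server certificate with Go's x509 package; a throwaway self-signed CA stands in for ca.pem/ca-key.pem, and none of this is minikube's provision code:

```go
// Sketch: build a server certificate whose SANs match the "generating server
// cert ... san=[...]" log line above and sign it with a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-384202"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: 127.0.0.1, the VM IP, and its hostnames.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.165")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-384202"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem and ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	ca, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	if _, _, err := newServerCert(ca, caKey); err != nil {
		panic(err)
	}
	fmt.Println("server cert generated")
}
```
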
	I0927 18:27:58.437633   55766 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:27:58.437804   55766 config.go:182] Loaded profile config "test-preload-384202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0927 18:27:58.437876   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.440890   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.441297   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.441324   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.441556   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.441719   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.441877   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.442006   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.442173   55766 main.go:141] libmachine: Using SSH client type: native
	I0927 18:27:58.442363   55766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0927 18:27:58.442385   55766 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:27:58.663822   55766 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 18:27:58.663860   55766 machine.go:96] duration metric: took 880.629784ms to provisionDockerMachine
	I0927 18:27:58.663876   55766 start.go:293] postStartSetup for "test-preload-384202" (driver="kvm2")
	I0927 18:27:58.663894   55766 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:27:58.663920   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:58.664215   55766 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:27:58.664265   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.666864   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.667290   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.667317   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.667444   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.667659   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.667836   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.667963   55766 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa Username:docker}
	I0927 18:27:58.749147   55766 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:27:58.753317   55766 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:27:58.753341   55766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:27:58.753415   55766 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:27:58.753527   55766 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:27:58.753669   55766 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:27:58.762596   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:27:58.784936   55766 start.go:296] duration metric: took 121.040778ms for postStartSetup
	I0927 18:27:58.784992   55766 fix.go:56] duration metric: took 19.044880347s for fixHost
	I0927 18:27:58.785016   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.787533   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.787938   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.787956   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.788139   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.788343   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.788490   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.788658   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.788882   55766 main.go:141] libmachine: Using SSH client type: native
	I0927 18:27:58.789062   55766 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.165 22 <nil> <nil>}
	I0927 18:27:58.789075   55766 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:27:58.895305   55766 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727461678.870257874
	
	I0927 18:27:58.895340   55766 fix.go:216] guest clock: 1727461678.870257874
	I0927 18:27:58.895348   55766 fix.go:229] Guest: 2024-09-27 18:27:58.870257874 +0000 UTC Remote: 2024-09-27 18:27:58.784997434 +0000 UTC m=+31.575161630 (delta=85.26044ms)
	I0927 18:27:58.895394   55766 fix.go:200] guest clock delta is within tolerance: 85.26044ms
	I0927 18:27:58.895403   55766 start.go:83] releasing machines lock for "test-preload-384202", held for 19.155307796s
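
The guest-clock check above reads `date +%s.%N` over SSH, compares it to the host clock, and accepts the ~85ms delta as within tolerance. A small sketch of that comparison; the 2s tolerance used here is an assumed value, not necessarily minikube's:

```go
// Sketch: parse the guest's `date +%s.%N` output, diff it against the host
// time, and check the delta against a tolerance before deciding to sync.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestSecs string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestSecs, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return hostNow.Sub(guest), nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for illustration

	// Values lifted from the log: guest `date +%s.%N` vs. the host timestamp.
	host := time.Date(2024, 9, 27, 18, 27, 58, 784997434, time.UTC)
	delta, err := clockDelta("1727461678.870257874", host)
	if err != nil {
		fmt.Println(err)
		return
	}
	if delta > -tolerance && delta < tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; clock would be synced\n", delta)
	}
}
```
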
	I0927 18:27:58.895425   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:58.895727   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetIP
	I0927 18:27:58.898480   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.898915   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.898942   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.899105   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:58.899642   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:58.899837   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:27:58.899914   55766 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:27:58.899978   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.900049   55766 ssh_runner.go:195] Run: cat /version.json
	I0927 18:27:58.900078   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:27:58.902639   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.903010   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.903034   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.903155   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.903199   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.903388   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.903551   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.903594   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:27:58.903636   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:27:58.903709   55766 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa Username:docker}
	I0927 18:27:58.903931   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:27:58.904081   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:27:58.904232   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:27:58.904341   55766 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa Username:docker}
	I0927 18:27:59.022683   55766 ssh_runner.go:195] Run: systemctl --version
	I0927 18:27:59.028642   55766 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:27:59.171930   55766 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 18:27:59.177512   55766 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:27:59.177582   55766 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:27:59.193193   55766 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 18:27:59.193217   55766 start.go:495] detecting cgroup driver to use...
	I0927 18:27:59.193284   55766 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:27:59.209721   55766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:27:59.223229   55766 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:27:59.223334   55766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:27:59.236307   55766 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:27:59.249713   55766 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:27:59.363663   55766 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:27:59.505652   55766 docker.go:233] disabling docker service ...
	I0927 18:27:59.505734   55766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:27:59.519810   55766 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:27:59.532317   55766 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:27:59.672004   55766 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:27:59.791051   55766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:27:59.805043   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:27:59.823328   55766 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0927 18:27:59.823384   55766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.833734   55766 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:27:59.833802   55766 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.843997   55766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.854315   55766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.864390   55766 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:27:59.874494   55766 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.884302   55766 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.900762   55766 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:27:59.910710   55766 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:27:59.919583   55766 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 18:27:59.919651   55766 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 18:27:59.931702   55766 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:27:59.941150   55766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:28:00.057608   55766 ssh_runner.go:195] Run: sudo systemctl restart crio
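
The block above configures CRI-O by rewriting `/etc/crio/crio.conf.d/02-crio.conf` with a series of idempotent sed edits (pause image, cgroup driver, conmon cgroup) and then reloading systemd and restarting the service. A minimal sketch of that sequence; `configureCRIO` and the dry-run runner are hypothetical names, not minikube's crio.go API, and minikube runs the real commands over SSH:

```go
// Sketch of the CRI-O configuration pass: a list of idempotent shell edits to
// 02-crio.conf followed by a daemon-reload and a crio restart.
package main

import "fmt"

func configureCRIO(run func(string) error, conf, pauseImage, cgroupManager string) error {
	steps := []string{
		// Pin the pause image and the cgroup driver, as in the sed commands above.
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		// Recreate conmon_cgroup directly after cgroup_manager so the keys stay adjacent.
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, step := range steps {
		if err := run(step); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Dry run: print each command instead of executing it on the guest.
	printer := func(cmd string) error { fmt.Println(cmd); return nil }
	_ = configureCRIO(printer, "/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.7", "cgroupfs")
}
```
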
	I0927 18:28:00.147829   55766 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:28:00.147898   55766 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:28:00.152399   55766 start.go:563] Will wait 60s for crictl version
	I0927 18:28:00.152469   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:00.156180   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:28:00.193829   55766 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:28:00.193908   55766 ssh_runner.go:195] Run: crio --version
	I0927 18:28:00.221427   55766 ssh_runner.go:195] Run: crio --version
	I0927 18:28:00.249629   55766 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0927 18:28:00.251330   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetIP
	I0927 18:28:00.253839   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:28:00.254172   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:28:00.254203   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:28:00.254420   55766 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 18:28:00.258334   55766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:28:00.270432   55766 kubeadm.go:883] updating cluster {Name:test-preload-384202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-384202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:28:00.270554   55766 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0927 18:28:00.270604   55766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:28:00.304773   55766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0927 18:28:00.304838   55766 ssh_runner.go:195] Run: which lz4
	I0927 18:28:00.308887   55766 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 18:28:00.312943   55766 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 18:28:00.312982   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0927 18:28:01.771840   55766 crio.go:462] duration metric: took 1.463018639s to copy over tarball
	I0927 18:28:01.771919   55766 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 18:28:04.239836   55766 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.467874116s)
	I0927 18:28:04.239878   55766 crio.go:469] duration metric: took 2.468009777s to extract the tarball
	I0927 18:28:04.239887   55766 ssh_runner.go:146] rm: /preloaded.tar.lz4
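
Extraction above shells out to tar with the lz4 filter and records the elapsed time as a duration metric before deleting the tarball. A small sketch of the same call; the paths in `main` are placeholders rather than the VM's real `/preloaded.tar.lz4` and `/var`:

```go
// Sketch of the timed tarball extraction: run tar with the lz4 filter and
// report how long it took, as the duration-metric lines do.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func extractPreload(tarball, destDir string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return 0, fmt.Errorf("extract %s: %v\n%s", tarball, err, out)
	}
	return time.Since(start), nil
}

func main() {
	if d, err := extractPreload("/tmp/preloaded.tar.lz4", "/tmp/var"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("duration metric: took %s to extract the tarball\n", d)
	}
}
```
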
	I0927 18:28:04.280588   55766 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:28:04.328022   55766 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0927 18:28:04.328046   55766 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 18:28:04.328113   55766 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:04.328131   55766 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0927 18:28:04.328104   55766 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:28:04.328172   55766 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:04.328209   55766 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.328268   55766 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.328270   55766 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.328327   55766 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:04.329713   55766 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:04.329725   55766 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.329730   55766 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:04.329737   55766 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:04.329714   55766 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.329752   55766 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0927 18:28:04.329773   55766 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.329877   55766 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:28:04.538140   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.573650   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.579516   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.588135   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:04.589359   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:04.590585   55766 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0927 18:28:04.590618   55766 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.590708   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.595193   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0927 18:28:04.610687   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:04.717295   55766 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0927 18:28:04.717343   55766 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0927 18:28:04.717344   55766 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.717368   55766 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.717407   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.717408   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.738063   55766 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0927 18:28:04.738108   55766 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:04.738154   55766 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0927 18:28:04.738182   55766 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:04.738218   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.738156   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.738250   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.741613   55766 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0927 18:28:04.741636   55766 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0927 18:28:04.741651   55766 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:04.741656   55766 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0927 18:28:04.741692   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.741700   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.741692   55766 ssh_runner.go:195] Run: which crictl
	I0927 18:28:04.741728   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.809595   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.809708   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:04.809710   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:04.813696   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0927 18:28:04.813753   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.813836   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:04.813885   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.905873   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:04.956065   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:04.956134   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0927 18:28:04.988508   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0927 18:28:04.988613   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0927 18:28:04.988680   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0927 18:28:04.988772   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:05.017923   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0927 18:28:05.062271   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0927 18:28:05.062273   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0927 18:28:05.062396   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0927 18:28:05.155926   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0927 18:28:05.155959   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0927 18:28:05.156036   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0927 18:28:05.156035   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0927 18:28:05.156077   55766 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0927 18:28:05.156132   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0927 18:28:05.168290   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0927 18:28:05.168348   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0927 18:28:05.168366   55766 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0927 18:28:05.168367   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0927 18:28:05.168417   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0927 18:28:05.168401   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0927 18:28:05.168507   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0927 18:28:05.220368   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0927 18:28:05.220490   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0927 18:28:05.227521   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0927 18:28:05.227625   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0927 18:28:05.227698   55766 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0927 18:28:05.227712   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0927 18:28:05.227809   55766 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0927 18:28:05.761415   55766 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:28:07.851994   55766 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.683457914s)
	I0927 18:28:07.852040   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0927 18:28:07.852052   55766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.683612458s)
	I0927 18:28:07.852076   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0927 18:28:07.852103   55766 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0927 18:28:07.852131   55766 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.624304027s)
	I0927 18:28:07.852150   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0927 18:28:07.852164   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0927 18:28:07.852174   55766 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.090730381s)
	I0927 18:28:07.852105   55766 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.631595598s)
	I0927 18:28:07.852224   55766 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0927 18:28:08.594750   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0927 18:28:08.594798   55766 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0927 18:28:08.594853   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0927 18:28:09.345532   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0927 18:28:09.345592   55766 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0927 18:28:09.345713   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0927 18:28:11.594280   55766 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.248533184s)
	I0927 18:28:11.594313   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0927 18:28:11.594342   55766 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0927 18:28:11.594394   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0927 18:28:12.436463   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0927 18:28:12.436506   55766 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0927 18:28:12.436556   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0927 18:28:12.784052   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0927 18:28:12.784100   55766 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0927 18:28:12.784156   55766 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0927 18:28:12.937003   55766 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0927 18:28:12.937050   55766 cache_images.go:123] Successfully loaded all cached images
	I0927 18:28:12.937055   55766 cache_images.go:92] duration metric: took 8.608998816s to LoadCachedImages
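Each "Loading image from" / "copy: skipping ... (exists)" pair above stats the tarball on the node, copies it only when missing, and then hands it to "sudo podman load -i" so CRI-O's image store picks it up. A minimal sketch of that flow, with the SSH hop collapsed into local file operations for brevity (the paths and the loadCached helper are assumptions):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCached copies a cached image tarball into the node's image directory,
// skipping the copy when the file already exists, then loads it with podman.
func loadCached(cacheDir, imagesDir, name string) error {
	src := filepath.Join(cacheDir, name)
	dst := filepath.Join(imagesDir, name)
	if _, err := os.Stat(dst); os.IsNotExist(err) {
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0o644); err != nil {
			return err
		}
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", dst, err, out)
	}
	return nil
}

func main() {
	if err := loadCached("/root/.minikube/cache/images", "/var/lib/minikube/images", "kube-scheduler_v1.24.4"); err != nil {
		fmt.Println(err)
	}
}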
	I0927 18:28:12.937065   55766 kubeadm.go:934] updating node { 192.168.39.165 8443 v1.24.4 crio true true} ...
	I0927 18:28:12.937202   55766 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-384202 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.165
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-384202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 18:28:12.937303   55766 ssh_runner.go:195] Run: crio config
	I0927 18:28:12.983561   55766 cni.go:84] Creating CNI manager for ""
	I0927 18:28:12.983582   55766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:28:12.983591   55766 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:28:12.983607   55766 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.165 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-384202 NodeName:test-preload-384202 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.165"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.165 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:28:12.983770   55766 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.165
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-384202"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.165
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.165"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:28:12.983835   55766 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0927 18:28:12.993397   55766 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:28:12.993483   55766 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:28:13.002512   55766 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0927 18:28:13.018276   55766 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:28:13.034608   55766 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
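The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is generated from the cluster parameters and then copied to /var/tmp/minikube/kubeadm.yaml.new, as the 2106-byte scp shows. A trimmed sketch of rendering just the InitConfiguration section from a Go text/template (the template and field names are illustrative, not minikube's generator):

package main

import (
	"os"
	"text/template"
)

// A cut-down template for the InitConfiguration document; the real file also
// carries the ClusterConfiguration, KubeletConfiguration and
// KubeProxyConfiguration sections shown in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	params := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.39.165", 8443, "test-preload-384202"}
	// The rendered output would then be copied to /var/tmp/minikube/kubeadm.yaml.new.
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}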
	I0927 18:28:13.051662   55766 ssh_runner.go:195] Run: grep 192.168.39.165	control-plane.minikube.internal$ /etc/hosts
	I0927 18:28:13.055297   55766 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.165	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
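The grep plus bash rewrite above pins control-plane.minikube.internal to the node IP in /etc/hosts idempotently: any existing entry for the alias is dropped, then a fresh "IP<TAB>host" line is appended. Roughly the same logic in Go (ensureHostsEntry is an illustrative helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any previous line resolving the given host and appends
// a fresh "IP<TAB>host" mapping, mirroring the bash one-liner in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry for the control-plane alias
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.165", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}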
	I0927 18:28:13.066949   55766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:28:13.188938   55766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:28:13.205619   55766 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202 for IP: 192.168.39.165
	I0927 18:28:13.205641   55766 certs.go:194] generating shared ca certs ...
	I0927 18:28:13.205655   55766 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:28:13.205835   55766 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:28:13.205997   55766 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:28:13.206030   55766 certs.go:256] generating profile certs ...
	I0927 18:28:13.206173   55766 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.key
	I0927 18:28:13.206247   55766 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/apiserver.key.bce457bf
	I0927 18:28:13.206308   55766 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/proxy-client.key
	I0927 18:28:13.206474   55766 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:28:13.206507   55766 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:28:13.206523   55766 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:28:13.206552   55766 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:28:13.206584   55766 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:28:13.206611   55766 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:28:13.206702   55766 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:28:13.207372   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:28:13.248343   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:28:13.278414   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:28:13.319265   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:28:13.348420   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0927 18:28:13.377703   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 18:28:13.413936   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:28:13.437221   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 18:28:13.461350   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:28:13.484737   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:28:13.507385   55766 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:28:13.529856   55766 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:28:13.545818   55766 ssh_runner.go:195] Run: openssl version
	I0927 18:28:13.551729   55766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:28:13.562292   55766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:28:13.566942   55766 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:28:13.566992   55766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:28:13.572658   55766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:28:13.583436   55766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:28:13.598120   55766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:28:13.602624   55766 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:28:13.602706   55766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:28:13.608708   55766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 18:28:13.618673   55766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:28:13.628792   55766 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:28:13.633088   55766 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:28:13.633150   55766 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:28:13.638482   55766 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:28:13.648527   55766 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:28:13.652850   55766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 18:28:13.658572   55766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 18:28:13.664402   55766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 18:28:13.670294   55766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 18:28:13.676145   55766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 18:28:13.681938   55766 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
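The series of "openssl x509 -noout ... -checkend 86400" runs asserts that each control-plane certificate stays valid for at least another 24 hours. The equivalent check with Go's standard library, shown for one certificate path from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least the given duration, which is what "-checkend 86400" asserts.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}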
	I0927 18:28:13.687754   55766 kubeadm.go:392] StartCluster: {Name:test-preload-384202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-384202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:28:13.687855   55766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:28:13.687916   55766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:28:13.724158   55766 cri.go:89] found id: ""
	I0927 18:28:13.724242   55766 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 18:28:13.734524   55766 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0927 18:28:13.734546   55766 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0927 18:28:13.734605   55766 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0927 18:28:13.743766   55766 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0927 18:28:13.744254   55766 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-384202" does not appear in /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:28:13.744369   55766 kubeconfig.go:62] /home/jenkins/minikube-integration/19712-11184/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-384202" cluster setting kubeconfig missing "test-preload-384202" context setting]
	I0927 18:28:13.744644   55766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:28:13.745333   55766 kapi.go:59] client config for test-preload-384202: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
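The "kubeconfig needs updating (will repair)" message above means the shared kubeconfig is missing the cluster and context entries for this profile, so they are rewritten under a file lock. With client-go's clientcmd package that repair looks roughly like the following sketch (minikube's own kubeconfig helpers differ in detail):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/19712-11184/kubeconfig"
	base := "/home/jenkins/minikube-integration/19712-11184/.minikube"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}
	// Re-add the cluster, user and context entries for the profile.
	cfg.Clusters["test-preload-384202"] = &api.Cluster{
		Server:               "https://192.168.39.165:8443",
		CertificateAuthority: base + "/ca.crt",
	}
	cfg.AuthInfos["test-preload-384202"] = &api.AuthInfo{
		ClientCertificate: base + "/profiles/test-preload-384202/client.crt",
		ClientKey:         base + "/profiles/test-preload-384202/client.key",
	}
	cfg.Contexts["test-preload-384202"] = &api.Context{
		Cluster:  "test-preload-384202",
		AuthInfo: "test-preload-384202",
	}
	cfg.CurrentContext = "test-preload-384202"
	if err := clientcmd.WriteToFile(*cfg, kubeconfig); err != nil {
		panic(err)
	}
}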
	I0927 18:28:13.745956   55766 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0927 18:28:13.755551   55766 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.165
	I0927 18:28:13.755589   55766 kubeadm.go:1160] stopping kube-system containers ...
	I0927 18:28:13.755598   55766 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0927 18:28:13.755643   55766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:28:13.789025   55766 cri.go:89] found id: ""
	I0927 18:28:13.789094   55766 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0927 18:28:13.804924   55766 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 18:28:13.814613   55766 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 18:28:13.814640   55766 kubeadm.go:157] found existing configuration files:
	
	I0927 18:28:13.814717   55766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 18:28:13.823027   55766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 18:28:13.823079   55766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 18:28:13.831891   55766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 18:28:13.840315   55766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 18:28:13.840394   55766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 18:28:13.849025   55766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 18:28:13.857239   55766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 18:28:13.857307   55766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 18:28:13.865761   55766 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 18:28:13.874310   55766 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 18:28:13.874401   55766 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
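The four grep/rm pairs above are the stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so the subsequent "kubeadm init phase kubeconfig" regenerates it. The same loop as a sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file or one that targets a different endpoint is removed
		// (best effort) so "kubeadm init phase kubeconfig" rewrites it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove:", rmErr)
			}
		}
	}
}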
	I0927 18:28:13.883500   55766 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 18:28:13.892192   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:28:13.990723   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:28:14.934175   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:28:15.177996   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:28:15.247577   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:28:15.313526   55766 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:28:15.313621   55766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:28:15.813894   55766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:28:16.313757   55766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:28:16.330702   55766 api_server.go:72] duration metric: took 1.017176533s to wait for apiserver process to appear ...
	I0927 18:28:16.330734   55766 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:28:16.330755   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:16.331229   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": dial tcp 192.168.39.165:8443: connect: connection refused
	I0927 18:28:16.830857   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:16.831515   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": dial tcp 192.168.39.165:8443: connect: connection refused
	I0927 18:28:17.331735   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:22.332337   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 18:28:22.332377   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:27.333187   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 18:28:27.333235   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:32.333572   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 18:28:32.333633   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:37.334184   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0927 18:28:37.334223   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:37.420691   55766 api_server.go:269] stopped: https://192.168.39.165:8443/healthz: Get "https://192.168.39.165:8443/healthz": read tcp 192.168.39.1:34610->192.168.39.165:8443: read: connection reset by peer
	I0927 18:28:37.831884   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:40.496743   55766 api_server.go:279] https://192.168.39.165:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 18:28:40.496781   55766 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 18:28:40.496821   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:40.560860   55766 api_server.go:279] https://192.168.39.165:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 18:28:40.560895   55766 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 18:28:40.831328   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:40.838785   55766 api_server.go:279] https://192.168.39.165:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:28:40.838816   55766 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:28:41.331492   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:41.337164   55766 api_server.go:279] https://192.168.39.165:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:28:41.337203   55766 api_server.go:103] status: https://192.168.39.165:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:28:41.831869   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:41.844369   55766 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0927 18:28:41.853717   55766 api_server.go:141] control plane version: v1.24.4
	I0927 18:28:41.853745   55766 api_server.go:131] duration metric: took 25.523004009s to wait for apiserver health ...
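The 25-second wait above is a plain poll of the apiserver's /healthz endpoint: connection-refused errors, the 403 from the anonymous user, and the 500s while the rbac and priority-class poststarthooks finish are all treated as "not healthy yet" and retried until a 200 "ok" arrives. A minimal polling sketch (skipping TLS verification here is a shortcut; minikube authenticates with the profile's client certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Shortcut for the sketch only: skip TLS verification instead of
		// loading the cluster CA and the client certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.165:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}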
	I0927 18:28:41.853755   55766 cni.go:84] Creating CNI manager for ""
	I0927 18:28:41.853761   55766 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:28:41.855830   55766 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 18:28:41.857449   55766 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 18:28:41.867903   55766 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
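Configuring the bridge CNI comes down to writing a conflist into /etc/cni/net.d so CRI-O attaches pods to the 10.244.0.0/16 pod CIDR. The exact 496-byte payload is not reproduced in the log; the snippet below writes a representative bridge conflist, not the verbatim file minikube installs:

package main

import "os"

// A representative bridge CNI configuration for the pod CIDR used above.
// This is an illustration, not the exact file minikube ships.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}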
	I0927 18:28:41.888964   55766 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:28:41.889048   55766 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 18:28:41.889064   55766 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 18:28:41.910444   55766 system_pods.go:59] 8 kube-system pods found
	I0927 18:28:41.910480   55766 system_pods.go:61] "coredns-6d4b75cb6d-6tjgq" [a24d740b-e7a1-43af-a4ca-a55c3ec7b80b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 18:28:41.910493   55766 system_pods.go:61] "coredns-6d4b75cb6d-scjdv" [b3c7f0fc-beb5-4776-9dca-71bda146b4c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 18:28:41.910498   55766 system_pods.go:61] "etcd-test-preload-384202" [5178c150-b9cf-4d0b-aed8-e1cd77f62b36] Running
	I0927 18:28:41.910503   55766 system_pods.go:61] "kube-apiserver-test-preload-384202" [abf42838-6803-4c20-a4ec-9efbef213f49] Running
	I0927 18:28:41.910506   55766 system_pods.go:61] "kube-controller-manager-test-preload-384202" [469c69e3-9db2-4957-9c22-b15b178a778d] Running
	I0927 18:28:41.910510   55766 system_pods.go:61] "kube-proxy-rj49w" [699ee90a-2fd5-4277-939c-75fb7aa461d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 18:28:41.910516   55766 system_pods.go:61] "kube-scheduler-test-preload-384202" [ba64471b-da6a-4ee6-b503-8e72a54e18b2] Running
	I0927 18:28:41.910525   55766 system_pods.go:61] "storage-provisioner" [7ed32aea-3808-4dc9-a3db-ecd5694b227d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0927 18:28:41.910531   55766 system_pods.go:74] duration metric: took 21.54051ms to wait for pod list to return data ...
	I0927 18:28:41.910537   55766 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:28:41.913734   55766 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:28:41.913763   55766 node_conditions.go:123] node cpu capacity is 2
	I0927 18:28:41.913773   55766 node_conditions.go:105] duration metric: took 3.232046ms to run NodePressure ...
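The kube-system inventory and the NodePressure check above correspond to listing pods and reading node capacity through the API. A client-go sketch of both calls (the kubeconfig path is taken from this job's environment):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19712-11184/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}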
	I0927 18:28:41.913790   55766 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:28:42.141991   55766 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 18:28:42.148878   55766 retry.go:31] will retry after 363.234938ms: kubelet not initialised
	I0927 18:28:42.519587   55766 retry.go:31] will retry after 422.568217ms: kubelet not initialised
	I0927 18:28:42.952306   55766 retry.go:31] will retry after 395.521674ms: kubelet not initialised
	I0927 18:28:43.352387   55766 retry.go:31] will retry after 1.082668431s: kubelet not initialised
	I0927 18:28:44.440462   55766 kubeadm.go:739] kubelet initialised
	I0927 18:28:44.440486   55766 kubeadm.go:740] duration metric: took 2.298473295s waiting for restarted kubelet to initialise ...
	I0927 18:28:44.440493   55766 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:28:44.444945   55766 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:44.450075   55766 pod_ready.go:98] node "test-preload-384202" hosting pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.450098   55766 pod_ready.go:82] duration metric: took 5.126769ms for pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace to be "Ready" ...
	E0927 18:28:44.450107   55766 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-384202" hosting pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.450113   55766 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:44.454844   55766 pod_ready.go:98] node "test-preload-384202" hosting pod "etcd-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.454868   55766 pod_ready.go:82] duration metric: took 4.74086ms for pod "etcd-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	E0927 18:28:44.454876   55766 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-384202" hosting pod "etcd-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.454886   55766 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:44.459076   55766 pod_ready.go:98] node "test-preload-384202" hosting pod "kube-apiserver-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.459102   55766 pod_ready.go:82] duration metric: took 4.210422ms for pod "kube-apiserver-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	E0927 18:28:44.459116   55766 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-384202" hosting pod "kube-apiserver-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.459122   55766 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:44.462749   55766 pod_ready.go:98] node "test-preload-384202" hosting pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.462768   55766 pod_ready.go:82] duration metric: took 3.636905ms for pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	E0927 18:28:44.462779   55766 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-384202" hosting pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.462785   55766 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rj49w" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:44.839874   55766 pod_ready.go:98] node "test-preload-384202" hosting pod "kube-proxy-rj49w" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.839901   55766 pod_ready.go:82] duration metric: took 377.101535ms for pod "kube-proxy-rj49w" in "kube-system" namespace to be "Ready" ...
	E0927 18:28:44.839910   55766 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-384202" hosting pod "kube-proxy-rj49w" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:44.839916   55766 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:45.239057   55766 pod_ready.go:98] node "test-preload-384202" hosting pod "kube-scheduler-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:45.239084   55766 pod_ready.go:82] duration metric: took 399.161915ms for pod "kube-scheduler-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	E0927 18:28:45.239099   55766 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-384202" hosting pod "kube-scheduler-test-preload-384202" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:45.239107   55766 pod_ready.go:39] duration metric: took 798.603587ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
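pod_ready.go's "extra waiting" is a per-pod poll on the PodReady condition, short-circuited here because the node itself still reports Ready=False. Stripped to its essence (waitPodReady is an illustrative helper; clientset construction mirrors the earlier sketch):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its Ready condition is True or the
// timeout expires, roughly what the per-pod waits in the log do.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19712-11184/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "etcd-test-preload-384202", 4*time.Minute))
}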
	I0927 18:28:45.239123   55766 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 18:28:45.251135   55766 ops.go:34] apiserver oom_adj: -16
	I0927 18:28:45.251155   55766 kubeadm.go:597] duration metric: took 31.516603311s to restartPrimaryControlPlane
	I0927 18:28:45.251164   55766 kubeadm.go:394] duration metric: took 31.563417737s to StartCluster
	I0927 18:28:45.251179   55766 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:28:45.251258   55766 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:28:45.251965   55766 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:28:45.252181   55766 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.165 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 18:28:45.252245   55766 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 18:28:45.252376   55766 addons.go:69] Setting storage-provisioner=true in profile "test-preload-384202"
	I0927 18:28:45.252395   55766 addons.go:234] Setting addon storage-provisioner=true in "test-preload-384202"
	I0927 18:28:45.252394   55766 addons.go:69] Setting default-storageclass=true in profile "test-preload-384202"
	W0927 18:28:45.252403   55766 addons.go:243] addon storage-provisioner should already be in state true
	I0927 18:28:45.252413   55766 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-384202"
	I0927 18:28:45.252430   55766 host.go:66] Checking if "test-preload-384202" exists ...
	I0927 18:28:45.252454   55766 config.go:182] Loaded profile config "test-preload-384202": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0927 18:28:45.252771   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:28:45.252773   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:28:45.252815   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:28:45.252836   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:28:45.253781   55766 out.go:177] * Verifying Kubernetes components...
	I0927 18:28:45.255148   55766 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:28:45.268013   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0927 18:28:45.268612   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42001
	I0927 18:28:45.268678   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:28:45.268959   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:28:45.269218   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:28:45.269238   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:28:45.269478   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:28:45.269516   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:28:45.269565   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:28:45.269739   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetState
	I0927 18:28:45.269848   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:28:45.270333   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:28:45.270371   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:28:45.272166   55766 kapi.go:59] client config for test-preload-384202: &rest.Config{Host:"https://192.168.39.165:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.crt", KeyFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.key", CAFile:"/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f685c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0927 18:28:45.272383   55766 addons.go:234] Setting addon default-storageclass=true in "test-preload-384202"
	W0927 18:28:45.272395   55766 addons.go:243] addon default-storageclass should already be in state true
	I0927 18:28:45.272415   55766 host.go:66] Checking if "test-preload-384202" exists ...
	I0927 18:28:45.272661   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:28:45.272702   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:28:45.286623   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35215
	I0927 18:28:45.287122   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:28:45.287704   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:28:45.287729   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:28:45.288039   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:28:45.288196   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41931
	I0927 18:28:45.288540   55766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:28:45.288578   55766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:28:45.288653   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:28:45.289179   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:28:45.289203   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:28:45.289498   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:28:45.289666   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetState
	I0927 18:28:45.291382   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:28:45.293205   55766 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:28:45.294261   55766 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:28:45.294276   55766 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 18:28:45.294288   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:28:45.297497   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:28:45.297938   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:28:45.297967   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:28:45.298130   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:28:45.298308   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:28:45.298473   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:28:45.298682   55766 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa Username:docker}
	I0927 18:28:45.330841   55766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0927 18:28:45.331281   55766 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:28:45.331828   55766 main.go:141] libmachine: Using API Version  1
	I0927 18:28:45.331855   55766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:28:45.332191   55766 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:28:45.332368   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetState
	I0927 18:28:45.333816   55766 main.go:141] libmachine: (test-preload-384202) Calling .DriverName
	I0927 18:28:45.334022   55766 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 18:28:45.334041   55766 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 18:28:45.334059   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHHostname
	I0927 18:28:45.336629   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:28:45.336972   55766 main.go:141] libmachine: (test-preload-384202) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:68:ff", ip: ""} in network mk-test-preload-384202: {Iface:virbr1 ExpiryTime:2024-09-27 19:27:50 +0000 UTC Type:0 Mac:52:54:00:16:68:ff Iaid: IPaddr:192.168.39.165 Prefix:24 Hostname:test-preload-384202 Clientid:01:52:54:00:16:68:ff}
	I0927 18:28:45.337000   55766 main.go:141] libmachine: (test-preload-384202) DBG | domain test-preload-384202 has defined IP address 192.168.39.165 and MAC address 52:54:00:16:68:ff in network mk-test-preload-384202
	I0927 18:28:45.337134   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHPort
	I0927 18:28:45.337292   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHKeyPath
	I0927 18:28:45.337513   55766 main.go:141] libmachine: (test-preload-384202) Calling .GetSSHUsername
	I0927 18:28:45.337665   55766 sshutil.go:53] new ssh client: &{IP:192.168.39.165 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/test-preload-384202/id_rsa Username:docker}
	I0927 18:28:45.443003   55766 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:28:45.462608   55766 node_ready.go:35] waiting up to 6m0s for node "test-preload-384202" to be "Ready" ...
	I0927 18:28:45.519882   55766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:28:45.538820   55766 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:28:46.532101   55766 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.012177146s)
	I0927 18:28:46.532153   55766 main.go:141] libmachine: Making call to close driver server
	I0927 18:28:46.532167   55766 main.go:141] libmachine: (test-preload-384202) Calling .Close
	I0927 18:28:46.532154   55766 main.go:141] libmachine: Making call to close driver server
	I0927 18:28:46.532247   55766 main.go:141] libmachine: (test-preload-384202) Calling .Close
	I0927 18:28:46.532565   55766 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:28:46.532583   55766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:28:46.532592   55766 main.go:141] libmachine: Making call to close driver server
	I0927 18:28:46.532599   55766 main.go:141] libmachine: (test-preload-384202) Calling .Close
	I0927 18:28:46.532602   55766 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:28:46.532617   55766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:28:46.532633   55766 main.go:141] libmachine: Making call to close driver server
	I0927 18:28:46.532646   55766 main.go:141] libmachine: (test-preload-384202) Calling .Close
	I0927 18:28:46.532869   55766 main.go:141] libmachine: (test-preload-384202) DBG | Closing plugin on server side
	I0927 18:28:46.532883   55766 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:28:46.532896   55766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:28:46.532952   55766 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:28:46.532963   55766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:28:46.543510   55766 main.go:141] libmachine: Making call to close driver server
	I0927 18:28:46.543537   55766 main.go:141] libmachine: (test-preload-384202) Calling .Close
	I0927 18:28:46.543842   55766 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:28:46.543879   55766 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:28:46.545866   55766 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 18:28:46.547107   55766 addons.go:510] duration metric: took 1.294865814s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 18:28:47.466872   55766 node_ready.go:53] node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:49.967075   55766 node_ready.go:53] node "test-preload-384202" has status "Ready":"False"
	I0927 18:28:50.966669   55766 node_ready.go:49] node "test-preload-384202" has status "Ready":"True"
	I0927 18:28:50.966697   55766 node_ready.go:38] duration metric: took 5.50405886s for node "test-preload-384202" to be "Ready" ...
	I0927 18:28:50.966708   55766 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:28:50.972052   55766 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:50.978861   55766 pod_ready.go:93] pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace has status "Ready":"True"
	I0927 18:28:50.978889   55766 pod_ready.go:82] duration metric: took 6.810569ms for pod "coredns-6d4b75cb6d-scjdv" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:50.978898   55766 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:50.984117   55766 pod_ready.go:93] pod "etcd-test-preload-384202" in "kube-system" namespace has status "Ready":"True"
	I0927 18:28:50.984140   55766 pod_ready.go:82] duration metric: took 5.236135ms for pod "etcd-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:50.984149   55766 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:51.991213   55766 pod_ready.go:93] pod "kube-apiserver-test-preload-384202" in "kube-system" namespace has status "Ready":"True"
	I0927 18:28:51.991258   55766 pod_ready.go:82] duration metric: took 1.007102015s for pod "kube-apiserver-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:51.991269   55766 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:53.998073   55766 pod_ready.go:103] pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace has status "Ready":"False"
	I0927 18:28:55.499054   55766 pod_ready.go:93] pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace has status "Ready":"True"
	I0927 18:28:55.499077   55766 pod_ready.go:82] duration metric: took 3.507801875s for pod "kube-controller-manager-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:55.499093   55766 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rj49w" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:55.504873   55766 pod_ready.go:93] pod "kube-proxy-rj49w" in "kube-system" namespace has status "Ready":"True"
	I0927 18:28:55.504895   55766 pod_ready.go:82] duration metric: took 5.795971ms for pod "kube-proxy-rj49w" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:55.504903   55766 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:55.509014   55766 pod_ready.go:93] pod "kube-scheduler-test-preload-384202" in "kube-system" namespace has status "Ready":"True"
	I0927 18:28:55.509036   55766 pod_ready.go:82] duration metric: took 4.126748ms for pod "kube-scheduler-test-preload-384202" in "kube-system" namespace to be "Ready" ...
	I0927 18:28:55.509045   55766 pod_ready.go:39] duration metric: took 4.542328554s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:28:55.509058   55766 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:28:55.509127   55766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:28:55.524157   55766 api_server.go:72] duration metric: took 10.271949468s to wait for apiserver process to appear ...
	I0927 18:28:55.524186   55766 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:28:55.524208   55766 api_server.go:253] Checking apiserver healthz at https://192.168.39.165:8443/healthz ...
	I0927 18:28:55.529600   55766 api_server.go:279] https://192.168.39.165:8443/healthz returned 200:
	ok
	I0927 18:28:55.530457   55766 api_server.go:141] control plane version: v1.24.4
	I0927 18:28:55.530482   55766 api_server.go:131] duration metric: took 6.288522ms to wait for apiserver health ...
	I0927 18:28:55.530493   55766 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:28:55.569570   55766 system_pods.go:59] 7 kube-system pods found
	I0927 18:28:55.569608   55766 system_pods.go:61] "coredns-6d4b75cb6d-scjdv" [b3c7f0fc-beb5-4776-9dca-71bda146b4c5] Running
	I0927 18:28:55.569615   55766 system_pods.go:61] "etcd-test-preload-384202" [5178c150-b9cf-4d0b-aed8-e1cd77f62b36] Running
	I0927 18:28:55.569621   55766 system_pods.go:61] "kube-apiserver-test-preload-384202" [abf42838-6803-4c20-a4ec-9efbef213f49] Running
	I0927 18:28:55.569627   55766 system_pods.go:61] "kube-controller-manager-test-preload-384202" [469c69e3-9db2-4957-9c22-b15b178a778d] Running
	I0927 18:28:55.569631   55766 system_pods.go:61] "kube-proxy-rj49w" [699ee90a-2fd5-4277-939c-75fb7aa461d3] Running
	I0927 18:28:55.569642   55766 system_pods.go:61] "kube-scheduler-test-preload-384202" [ba64471b-da6a-4ee6-b503-8e72a54e18b2] Running
	I0927 18:28:55.569653   55766 system_pods.go:61] "storage-provisioner" [7ed32aea-3808-4dc9-a3db-ecd5694b227d] Running
	I0927 18:28:55.569662   55766 system_pods.go:74] duration metric: took 39.161572ms to wait for pod list to return data ...
	I0927 18:28:55.569675   55766 default_sa.go:34] waiting for default service account to be created ...
	I0927 18:28:55.767547   55766 default_sa.go:45] found service account: "default"
	I0927 18:28:55.767580   55766 default_sa.go:55] duration metric: took 197.896118ms for default service account to be created ...
	I0927 18:28:55.767589   55766 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 18:28:55.970522   55766 system_pods.go:86] 7 kube-system pods found
	I0927 18:28:55.970567   55766 system_pods.go:89] "coredns-6d4b75cb6d-scjdv" [b3c7f0fc-beb5-4776-9dca-71bda146b4c5] Running
	I0927 18:28:55.970577   55766 system_pods.go:89] "etcd-test-preload-384202" [5178c150-b9cf-4d0b-aed8-e1cd77f62b36] Running
	I0927 18:28:55.970585   55766 system_pods.go:89] "kube-apiserver-test-preload-384202" [abf42838-6803-4c20-a4ec-9efbef213f49] Running
	I0927 18:28:55.970599   55766 system_pods.go:89] "kube-controller-manager-test-preload-384202" [469c69e3-9db2-4957-9c22-b15b178a778d] Running
	I0927 18:28:55.970605   55766 system_pods.go:89] "kube-proxy-rj49w" [699ee90a-2fd5-4277-939c-75fb7aa461d3] Running
	I0927 18:28:55.970612   55766 system_pods.go:89] "kube-scheduler-test-preload-384202" [ba64471b-da6a-4ee6-b503-8e72a54e18b2] Running
	I0927 18:28:55.970619   55766 system_pods.go:89] "storage-provisioner" [7ed32aea-3808-4dc9-a3db-ecd5694b227d] Running
	I0927 18:28:55.970629   55766 system_pods.go:126] duration metric: took 203.033564ms to wait for k8s-apps to be running ...
	I0927 18:28:55.970658   55766 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 18:28:55.970722   55766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:28:55.985484   55766 system_svc.go:56] duration metric: took 14.83371ms WaitForService to wait for kubelet
	I0927 18:28:55.985517   55766 kubeadm.go:582] duration metric: took 10.733312777s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:28:55.985543   55766 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:28:56.167306   55766 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:28:56.167333   55766 node_conditions.go:123] node cpu capacity is 2
	I0927 18:28:56.167344   55766 node_conditions.go:105] duration metric: took 181.795121ms to run NodePressure ...
	I0927 18:28:56.167354   55766 start.go:241] waiting for startup goroutines ...
	I0927 18:28:56.167360   55766 start.go:246] waiting for cluster config update ...
	I0927 18:28:56.167370   55766 start.go:255] writing updated cluster config ...
	I0927 18:28:56.167623   55766 ssh_runner.go:195] Run: rm -f paused
	I0927 18:28:56.215011   55766 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I0927 18:28:56.216741   55766 out.go:201] 
	W0927 18:28:56.217854   55766 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0927 18:28:56.219159   55766 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0927 18:28:56.220503   55766 out.go:177] * Done! kubectl is now configured to use "test-preload-384202" cluster and "default" namespace by default
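A note on the "waiting for apiserver healthz status" step logged at 18:28:55 above: that wait amounts to polling the cluster's /healthz endpoint over mutual TLS until it answers 200 "ok". The Go sketch below reproduces that probe stand-alone; it is an illustrative approximation, not minikube's own implementation. The endpoint URL and certificate paths are copied from the rest.Config dump earlier in this log, while the poll interval and deadline are arbitrary values chosen for the sketch.

// healthzprobe.go: minimal stand-alone re-creation of the apiserver /healthz
// wait seen in the log above (assumed values are marked below).
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	const (
		// Taken verbatim from the kapi.go / api_server.go lines in this log.
		healthzURL = "https://192.168.39.165:8443/healthz"
		certFile   = "/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.crt"
		keyFile    = "/home/jenkins/minikube-integration/19712-11184/.minikube/profiles/test-preload-384202/client.key"
		caFile     = "/home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt"
	)

	// Build an HTTPS client using the profile's client certificate and CA.
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load client cert:", err)
		os.Exit(1)
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read CA:", err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
		},
	}

	// Poll until /healthz returns 200 "ok" or the (assumed) deadline expires.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthzURL)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", healthzURL, resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "apiserver did not report healthy before the deadline")
	os.Exit(1)
}

Once the cluster reports healthy, the version-skew warning above can be sidestepped as the log itself suggests: run the bundled kubectl ("minikube kubectl -- get pods -A") instead of the host's kubectl 1.31.1 against the 1.24.4 control plane.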
	
	
	==> CRI-O <==
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.088472207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461737088447526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e2472aa-ddb9-49c1-b403-3d16c57da31b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.088940268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=234333a6-11d5-4c22-8572-a72363997719 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.089007868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=234333a6-11d5-4c22-8572-a72363997719 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.089262437Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9c0ad146a0f9867f8eb2ffd948c177346badb46c8c09d8aaca3d7516b99d411,PodSandboxId:88593ec24c907b2834cfbb712ae2959e8bdd9effdba02f879bb31aef2a3264f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727461729596770437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-scjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c7f0fc-beb5-4776-9dca-71bda146b4c5,},Annotations:map[string]string{io.kubernetes.container.hash: 731f914e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a368a91edb477745d8a29c6539dd752b27fdf521608331530edeb3e01640db,PodSandboxId:1427d71b6ae7dce054bd862748ac908d22147106c8e99d0ba6d752540b0ee8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727461722664914035,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj49w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 699ee90a-2fd5-4277-939c-75fb7aa461d3,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcad88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47275518d6838aeab3d021fb05013c14a43719ab925fde0f51a0d8723ab96e83,PodSandboxId:6f371c89e81d66e89970baf1a6707adcdd7e288f7bdfd839ddd7791fd2610f65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461722331222609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e
d32aea-3808-4dc9-a3db-ecd5694b227d,},Annotations:map[string]string{io.kubernetes.container.hash: bde06f52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9deddcc8f94b3bfe68215b543b6ab995018597cd0902cc9642d665c013d552ce,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727461721503761838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0351192a3d8ee8b5895ea0fdc91340bce16fc0884ffec446d988b82a0a2e14d5,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727461717484736545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:becad5f85a3a1b7ffec57a39326e1f0ed7220a0455ba7ca1df324cc1c8ba70e1,PodSandboxId:6dd12d3d69b6363fe953b55d1b23ecc0fc96ae1a88f4d6b03a45cae78caf65f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727461715703879415,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b65ebd6b793a9d339391034eac0a93,}
,Annotations:map[string]string{io.kubernetes.container.hash: 83bfc938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc15250d11106ea0d4dccb3aa19c38188ea7aef4cb17682252fc8a2c2db40cfe,PodSandboxId:2ec441d37ec520ad90ebc2eeba47cca8bc653ee9fe96055f4531b23ebab6d23e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727461695994026825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f0f5631e765d15cb8d51f7cc1da263,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2814893ad024c37e04a013de5580d23d5835e083a2291f0343b796b30ad7ca,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1727461695995833649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string
]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5457095213ccd2591eb7d1911d03fec3c84adf32c07bf53dec8c4a42c016d30,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1727461695945271887,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=234333a6-11d5-4c22-8572-a72363997719 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.125893058Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e69895e9-1303-4930-88b5-7b8404f0a33e name=/runtime.v1.RuntimeService/Version
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.125974674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e69895e9-1303-4930-88b5-7b8404f0a33e name=/runtime.v1.RuntimeService/Version
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.127218104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43113e2d-f819-4ddc-9810-0c04e2e8cad4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.127706855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461737127684383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43113e2d-f819-4ddc-9810-0c04e2e8cad4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.128174573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c823bdd-e810-4803-b7e9-116219d18b90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.128241624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c823bdd-e810-4803-b7e9-116219d18b90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.128467312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9c0ad146a0f9867f8eb2ffd948c177346badb46c8c09d8aaca3d7516b99d411,PodSandboxId:88593ec24c907b2834cfbb712ae2959e8bdd9effdba02f879bb31aef2a3264f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727461729596770437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-scjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c7f0fc-beb5-4776-9dca-71bda146b4c5,},Annotations:map[string]string{io.kubernetes.container.hash: 731f914e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a368a91edb477745d8a29c6539dd752b27fdf521608331530edeb3e01640db,PodSandboxId:1427d71b6ae7dce054bd862748ac908d22147106c8e99d0ba6d752540b0ee8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727461722664914035,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj49w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 699ee90a-2fd5-4277-939c-75fb7aa461d3,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcad88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47275518d6838aeab3d021fb05013c14a43719ab925fde0f51a0d8723ab96e83,PodSandboxId:6f371c89e81d66e89970baf1a6707adcdd7e288f7bdfd839ddd7791fd2610f65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461722331222609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e
d32aea-3808-4dc9-a3db-ecd5694b227d,},Annotations:map[string]string{io.kubernetes.container.hash: bde06f52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9deddcc8f94b3bfe68215b543b6ab995018597cd0902cc9642d665c013d552ce,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727461721503761838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0351192a3d8ee8b5895ea0fdc91340bce16fc0884ffec446d988b82a0a2e14d5,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727461717484736545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:becad5f85a3a1b7ffec57a39326e1f0ed7220a0455ba7ca1df324cc1c8ba70e1,PodSandboxId:6dd12d3d69b6363fe953b55d1b23ecc0fc96ae1a88f4d6b03a45cae78caf65f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727461715703879415,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b65ebd6b793a9d339391034eac0a93,}
,Annotations:map[string]string{io.kubernetes.container.hash: 83bfc938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc15250d11106ea0d4dccb3aa19c38188ea7aef4cb17682252fc8a2c2db40cfe,PodSandboxId:2ec441d37ec520ad90ebc2eeba47cca8bc653ee9fe96055f4531b23ebab6d23e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727461695994026825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f0f5631e765d15cb8d51f7cc1da263,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2814893ad024c37e04a013de5580d23d5835e083a2291f0343b796b30ad7ca,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1727461695995833649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string
]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5457095213ccd2591eb7d1911d03fec3c84adf32c07bf53dec8c4a42c016d30,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1727461695945271887,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c823bdd-e810-4803-b7e9-116219d18b90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.167395965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e53e23c-bbdb-4c3c-ab1d-e81808c8fff8 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.167483947Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e53e23c-bbdb-4c3c-ab1d-e81808c8fff8 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.168537709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0f0327b-cee6-4369-999a-e914c2c4e19e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.168969035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461737168948652,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0f0327b-cee6-4369-999a-e914c2c4e19e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.169515535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59bf7743-bf17-4a72-9633-678e97c4b44f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.169574095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59bf7743-bf17-4a72-9633-678e97c4b44f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.169773915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9c0ad146a0f9867f8eb2ffd948c177346badb46c8c09d8aaca3d7516b99d411,PodSandboxId:88593ec24c907b2834cfbb712ae2959e8bdd9effdba02f879bb31aef2a3264f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727461729596770437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-scjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c7f0fc-beb5-4776-9dca-71bda146b4c5,},Annotations:map[string]string{io.kubernetes.container.hash: 731f914e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a368a91edb477745d8a29c6539dd752b27fdf521608331530edeb3e01640db,PodSandboxId:1427d71b6ae7dce054bd862748ac908d22147106c8e99d0ba6d752540b0ee8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727461722664914035,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj49w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 699ee90a-2fd5-4277-939c-75fb7aa461d3,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcad88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47275518d6838aeab3d021fb05013c14a43719ab925fde0f51a0d8723ab96e83,PodSandboxId:6f371c89e81d66e89970baf1a6707adcdd7e288f7bdfd839ddd7791fd2610f65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461722331222609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e
d32aea-3808-4dc9-a3db-ecd5694b227d,},Annotations:map[string]string{io.kubernetes.container.hash: bde06f52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9deddcc8f94b3bfe68215b543b6ab995018597cd0902cc9642d665c013d552ce,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727461721503761838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0351192a3d8ee8b5895ea0fdc91340bce16fc0884ffec446d988b82a0a2e14d5,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727461717484736545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:becad5f85a3a1b7ffec57a39326e1f0ed7220a0455ba7ca1df324cc1c8ba70e1,PodSandboxId:6dd12d3d69b6363fe953b55d1b23ecc0fc96ae1a88f4d6b03a45cae78caf65f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727461715703879415,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b65ebd6b793a9d339391034eac0a93,}
,Annotations:map[string]string{io.kubernetes.container.hash: 83bfc938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc15250d11106ea0d4dccb3aa19c38188ea7aef4cb17682252fc8a2c2db40cfe,PodSandboxId:2ec441d37ec520ad90ebc2eeba47cca8bc653ee9fe96055f4531b23ebab6d23e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727461695994026825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f0f5631e765d15cb8d51f7cc1da263,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2814893ad024c37e04a013de5580d23d5835e083a2291f0343b796b30ad7ca,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1727461695995833649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string
]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5457095213ccd2591eb7d1911d03fec3c84adf32c07bf53dec8c4a42c016d30,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1727461695945271887,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59bf7743-bf17-4a72-9633-678e97c4b44f name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.208179834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4f8b16d-a6c2-460f-85df-89392968ebfb name=/runtime.v1.RuntimeService/Version
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.208262724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4f8b16d-a6c2-460f-85df-89392968ebfb name=/runtime.v1.RuntimeService/Version
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.209600470Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aea5dbe1-d515-4a42-bd70-93b70dfecfcd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.210044876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727461737210014538,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aea5dbe1-d515-4a42-bd70-93b70dfecfcd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.210585519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=663d382b-5483-43e4-a07a-28fc5587a632 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.210656483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=663d382b-5483-43e4-a07a-28fc5587a632 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:28:57 test-preload-384202 crio[675]: time="2024-09-27 18:28:57.210857679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e9c0ad146a0f9867f8eb2ffd948c177346badb46c8c09d8aaca3d7516b99d411,PodSandboxId:88593ec24c907b2834cfbb712ae2959e8bdd9effdba02f879bb31aef2a3264f3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1727461729596770437,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-scjdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c7f0fc-beb5-4776-9dca-71bda146b4c5,},Annotations:map[string]string{io.kubernetes.container.hash: 731f914e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41a368a91edb477745d8a29c6539dd752b27fdf521608331530edeb3e01640db,PodSandboxId:1427d71b6ae7dce054bd862748ac908d22147106c8e99d0ba6d752540b0ee8e8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1727461722664914035,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rj49w,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 699ee90a-2fd5-4277-939c-75fb7aa461d3,},Annotations:map[string]string{io.kubernetes.container.hash: 46bcad88,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47275518d6838aeab3d021fb05013c14a43719ab925fde0f51a0d8723ab96e83,PodSandboxId:6f371c89e81d66e89970baf1a6707adcdd7e288f7bdfd839ddd7791fd2610f65,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727461722331222609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7e
d32aea-3808-4dc9-a3db-ecd5694b227d,},Annotations:map[string]string{io.kubernetes.container.hash: bde06f52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9deddcc8f94b3bfe68215b543b6ab995018597cd0902cc9642d665c013d552ce,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1727461721503761838,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0351192a3d8ee8b5895ea0fdc91340bce16fc0884ffec446d988b82a0a2e14d5,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1727461717484736545,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:becad5f85a3a1b7ffec57a39326e1f0ed7220a0455ba7ca1df324cc1c8ba70e1,PodSandboxId:6dd12d3d69b6363fe953b55d1b23ecc0fc96ae1a88f4d6b03a45cae78caf65f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1727461715703879415,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3b65ebd6b793a9d339391034eac0a93,}
,Annotations:map[string]string{io.kubernetes.container.hash: 83bfc938,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc15250d11106ea0d4dccb3aa19c38188ea7aef4cb17682252fc8a2c2db40cfe,PodSandboxId:2ec441d37ec520ad90ebc2eeba47cca8bc653ee9fe96055f4531b23ebab6d23e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1727461695994026825,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14f0f5631e765d15cb8d51f7cc1da263,},Annotation
s:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c2814893ad024c37e04a013de5580d23d5835e083a2291f0343b796b30ad7ca,PodSandboxId:03a95769114262b8f6569df7f72cdcd46a9bee9cf34fcd9e28c5f1288747e10d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1727461695995833649,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73d5c5dc2ab05d3b283e48da26b1820,},Annotations:map[string
]string{io.kubernetes.container.hash: c9d05ec3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5457095213ccd2591eb7d1911d03fec3c84adf32c07bf53dec8c4a42c016d30,PodSandboxId:bce8149dc58502ed9a17b026ee6f4b36349ff47bafc247d06c15ab32501b7de9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1727461695945271887,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-384202,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0b1a4029d20c4cd1091d5b2d241b2e2,},Annotat
ions:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=663d382b-5483-43e4-a07a-28fc5587a632 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9c0ad146a0f9       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   88593ec24c907       coredns-6d4b75cb6d-scjdv
	41a368a91edb4       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   1427d71b6ae7d       kube-proxy-rj49w
	47275518d6838       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   6f371c89e81d6       storage-provisioner
	9deddcc8f94b3       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   15 seconds ago      Running             kube-controller-manager   2                   bce8149dc5850       kube-controller-manager-test-preload-384202
	0351192a3d8ee       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            2                   03a9576911426       kube-apiserver-test-preload-384202
	becad5f85a3a1       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   6dd12d3d69b63       etcd-test-preload-384202
	4c2814893ad02       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   41 seconds ago      Exited              kube-apiserver            1                   03a9576911426       kube-apiserver-test-preload-384202
	fc15250d11106       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   41 seconds ago      Running             kube-scheduler            1                   2ec441d37ec52       kube-scheduler-test-preload-384202
	a5457095213cc       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   41 seconds ago      Exited              kube-controller-manager   1                   bce8149dc5850       kube-controller-manager-test-preload-384202
	
	
	==> coredns [e9c0ad146a0f9867f8eb2ffd948c177346badb46c8c09d8aaca3d7516b99d411] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:58890 - 33832 "HINFO IN 6398068258890614443.7583265729338247627. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015230404s
	
	
	==> describe nodes <==
	Name:               test-preload-384202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-384202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=test-preload-384202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T18_27_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:26:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-384202
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:28:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:28:50 +0000   Fri, 27 Sep 2024 18:26:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:28:50 +0000   Fri, 27 Sep 2024 18:26:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:28:50 +0000   Fri, 27 Sep 2024 18:26:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:28:50 +0000   Fri, 27 Sep 2024 18:28:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.165
	  Hostname:    test-preload-384202
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 133a1ae0ef4747768ce0661258d6a2cf
	  System UUID:                133a1ae0-ef47-4776-8ce0-661258d6a2cf
	  Boot ID:                    7daf204a-b8a1-4fc8-be38-6d522b5a0b9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-scjdv                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     103s
	  kube-system                 etcd-test-preload-384202                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         116s
	  kube-system                 kube-apiserver-test-preload-384202             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-test-preload-384202    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-rj49w                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-test-preload-384202             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x5 over 2m4s)  kubelet          Node test-preload-384202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s (x4 over 2m4s)  kubelet          Node test-preload-384202 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s (x5 over 2m4s)  kubelet          Node test-preload-384202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s                 kubelet          Node test-preload-384202 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  116s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  116s                 kubelet          Node test-preload-384202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s                 kubelet          Node test-preload-384202 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                106s                 kubelet          Node test-preload-384202 status is now: NodeReady
	  Normal  RegisteredNode           104s                 node-controller  Node test-preload-384202 event: Registered Node test-preload-384202 in Controller
	  Normal  Starting                 42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)    kubelet          Node test-preload-384202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)    kubelet          Node test-preload-384202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x7 over 42s)    kubelet          Node test-preload-384202 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-384202 event: Registered Node test-preload-384202 in Controller
	
	
	==> dmesg <==
	[Sep27 18:27] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048799] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035839] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.804576] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.930141] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.410390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.923850] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.059321] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057391] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.168109] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.143181] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.270394] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Sep27 18:28] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.062200] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.918827] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[  +6.423745] kauditd_printk_skb: 95 callbacks suppressed
	[ +20.028214] kauditd_printk_skb: 5 callbacks suppressed
	[  +3.763086] systemd-fstab-generator[1874]: Ignoring "noauto" option for root device
	[  +4.104788] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [becad5f85a3a1b7ffec57a39326e1f0ed7220a0455ba7ca1df324cc1c8ba70e1] <==
	{"level":"info","ts":"2024-09-27T18:28:35.830Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ffc3b7517aaad9f6","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-27T18:28:35.831Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ffc3b7517aaad9f6","initial-advertise-peer-urls":["https://192.168.39.165:2380"],"listen-peer-urls":["https://192.168.39.165:2380"],"advertise-client-urls":["https://192.168.39.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.165:2380"}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 switched to configuration voters=(18429775660708452854)"}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","added-peer-id":"ffc3b7517aaad9f6","added-peer-peer-urls":["https://192.168.39.165:2380"]}
	{"level":"info","ts":"2024-09-27T18:28:35.832Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"58f0a6b9f17e1f60","local-member-id":"ffc3b7517aaad9f6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:28:35.833Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgPreVoteResp from ffc3b7517aaad9f6 at term 2"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 received MsgVoteResp from ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffc3b7517aaad9f6 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T18:28:36.818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffc3b7517aaad9f6 elected leader ffc3b7517aaad9f6 at term 3"}
	{"level":"info","ts":"2024-09-27T18:28:36.820Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ffc3b7517aaad9f6","local-member-attributes":"{Name:test-preload-384202 ClientURLs:[https://192.168.39.165:2379]}","request-path":"/0/members/ffc3b7517aaad9f6/attributes","cluster-id":"58f0a6b9f17e1f60","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:28:36.820Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:28:36.821Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:28:36.822Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T18:28:36.823Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.165:2379"}
	{"level":"info","ts":"2024-09-27T18:28:36.823Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T18:28:36.823Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:28:57 up 1 min,  0 users,  load average: 0.43, 0.13, 0.04
	Linux test-preload-384202 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0351192a3d8ee8b5895ea0fdc91340bce16fc0884ffec446d988b82a0a2e14d5] <==
	I0927 18:28:40.489344       1 controller.go:85] Starting OpenAPI controller
	I0927 18:28:40.489361       1 controller.go:85] Starting OpenAPI V3 controller
	I0927 18:28:40.489392       1 naming_controller.go:291] Starting NamingConditionController
	I0927 18:28:40.489415       1 establishing_controller.go:76] Starting EstablishingController
	I0927 18:28:40.489435       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0927 18:28:40.489447       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0927 18:28:40.489625       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0927 18:28:40.578058       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0927 18:28:40.583886       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 18:28:40.604602       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0927 18:28:40.606858       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 18:28:40.606863       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0927 18:28:40.650584       1 cache.go:39] Caches are synced for autoregister controller
	I0927 18:28:40.653196       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0927 18:28:40.655555       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0927 18:28:41.049347       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0927 18:28:41.378216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 18:28:42.035048       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0927 18:28:42.055439       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0927 18:28:42.096667       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0927 18:28:42.115358       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:28:42.124700       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:28:43.057349       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0927 18:28:53.712280       1 controller.go:611] quota admission added evaluator for: endpoints
	I0927 18:28:53.842012       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [4c2814893ad024c37e04a013de5580d23d5835e083a2291f0343b796b30ad7ca] <==
	I0927 18:28:16.871179       1 server.go:558] external host was not specified, using 192.168.39.165
	I0927 18:28:16.876153       1 server.go:158] Version: v1.24.4
	I0927 18:28:16.876187       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:28:17.405983       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0927 18:28:17.407640       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 18:28:17.407666       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0927 18:28:17.409069       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0927 18:28:17.409086       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0927 18:28:17.413236       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:18.404174       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:18.414370       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:19.404606       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:20.157624       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:21.175074       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:22.363364       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:23.759261       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:26.640220       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:27.888841       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:32.337405       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0927 18:28:35.182933       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0927 18:28:37.412935       1 run.go:74] "command failed" err="context deadline exceeded"
	
	
	==> kube-controller-manager [9deddcc8f94b3bfe68215b543b6ab995018597cd0902cc9642d665c013d552ce] <==
	I0927 18:28:53.658719       1 disruption.go:371] Sending events to api server.
	I0927 18:28:53.698267       1 shared_informer.go:262] Caches are synced for endpoint
	I0927 18:28:53.707174       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	W0927 18:28:53.735527       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="test-preload-384202" does not exist
	I0927 18:28:53.764055       1 shared_informer.go:262] Caches are synced for daemon sets
	I0927 18:28:53.772560       1 shared_informer.go:262] Caches are synced for taint
	I0927 18:28:53.772561       1 shared_informer.go:262] Caches are synced for persistent volume
	I0927 18:28:53.772841       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0927 18:28:53.772981       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-384202. Assuming now as a timestamp.
	I0927 18:28:53.773063       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0927 18:28:53.773143       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0927 18:28:53.773498       1 event.go:294] "Event occurred" object="test-preload-384202" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-384202 event: Registered Node test-preload-384202 in Controller"
	I0927 18:28:53.789548       1 shared_informer.go:262] Caches are synced for node
	I0927 18:28:53.789683       1 range_allocator.go:173] Starting range CIDR allocator
	I0927 18:28:53.789709       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0927 18:28:53.789777       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0927 18:28:53.813202       1 shared_informer.go:262] Caches are synced for resource quota
	I0927 18:28:53.824543       1 shared_informer.go:262] Caches are synced for TTL
	I0927 18:28:53.827002       1 shared_informer.go:262] Caches are synced for attach detach
	I0927 18:28:53.828902       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0927 18:28:53.832085       1 shared_informer.go:262] Caches are synced for resource quota
	I0927 18:28:53.833467       1 shared_informer.go:262] Caches are synced for GC
	I0927 18:28:54.242508       1 shared_informer.go:262] Caches are synced for garbage collector
	I0927 18:28:54.242557       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0927 18:28:54.273605       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [a5457095213ccd2591eb7d1911d03fec3c84adf32c07bf53dec8c4a42c016d30] <==
		/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc00026e700, {0x4d02200?, 0xc000a36090}, 0x946?)
		/usr/local/go/src/crypto/tls/conn.go:807 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc00026e700, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:614 +0x116
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:582
	crypto/tls.(*Conn).Read(0xc00026e700, {0xc000f56000, 0x1000, 0x91a200?})
		/usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
	bufio.(*Reader).Read(0xc000023da0, {0xc0003d7380, 0x9, 0x936b82?})
		/usr/local/go/src/bufio/bufio.go:236 +0x1b4
	io.ReadAtLeast({0x4cf9b00, 0xc000023da0}, {0xc0003d7380, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:331 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:350
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0003d7380?, 0x9?, 0xc001e79a10?}, {0x4cf9b00?, 0xc000023da0?})
		vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0003d7340)
		vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000f1ff98)
		vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000393380)
		vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		vendor/golang.org/x/net/http2/transport.go:725 +0xa65
	
	
	==> kube-proxy [41a368a91edb477745d8a29c6539dd752b27fdf521608331530edeb3e01640db] <==
	I0927 18:28:42.934207       1 node.go:163] Successfully retrieved node IP: 192.168.39.165
	I0927 18:28:42.934853       1 server_others.go:138] "Detected node IP" address="192.168.39.165"
	I0927 18:28:42.934994       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0927 18:28:43.040420       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0927 18:28:43.040504       1 server_others.go:206] "Using iptables Proxier"
	I0927 18:28:43.040551       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0927 18:28:43.041631       1 server.go:661] "Version info" version="v1.24.4"
	I0927 18:28:43.041692       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:28:43.046350       1 config.go:317] "Starting service config controller"
	I0927 18:28:43.047229       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0927 18:28:43.047346       1 config.go:226] "Starting endpoint slice config controller"
	I0927 18:28:43.047368       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0927 18:28:43.051141       1 config.go:444] "Starting node config controller"
	I0927 18:28:43.051197       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0927 18:28:43.147483       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0927 18:28:43.147552       1 shared_informer.go:262] Caches are synced for service config
	I0927 18:28:43.151318       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [fc15250d11106ea0d4dccb3aa19c38188ea7aef4cb17682252fc8a2c2db40cfe] <==
	W0927 18:28:40.553060       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0927 18:28:40.553546       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0927 18:28:40.553754       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 18:28:40.553828       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0927 18:28:40.553966       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0927 18:28:40.556252       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0927 18:28:40.556263       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0927 18:28:40.556464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0927 18:28:40.555976       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 18:28:40.556552       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0927 18:28:40.556006       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 18:28:40.556639       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0927 18:28:40.556032       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 18:28:40.556685       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0927 18:28:40.556094       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 18:28:40.556799       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0927 18:28:40.556203       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0927 18:28:40.556888       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0927 18:28:40.556312       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 18:28:40.556932       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0927 18:28:40.555926       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 18:28:40.557022       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0927 18:28:40.558945       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 18:28:40.558996       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0927 18:28:42.024190       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.474417    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/699ee90a-2fd5-4277-939c-75fb7aa461d3-xtables-lock\") pod \"kube-proxy-rj49w\" (UID: \"699ee90a-2fd5-4277-939c-75fb7aa461d3\") " pod="kube-system/kube-proxy-rj49w"
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.474437    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x64rm\" (UniqueName: \"kubernetes.io/projected/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-kube-api-access-x64rm\") pod \"coredns-6d4b75cb6d-scjdv\" (UID: \"b3c7f0fc-beb5-4776-9dca-71bda146b4c5\") " pod="kube-system/coredns-6d4b75cb6d-scjdv"
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.474456    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/699ee90a-2fd5-4277-939c-75fb7aa461d3-lib-modules\") pod \"kube-proxy-rj49w\" (UID: \"699ee90a-2fd5-4277-939c-75fb7aa461d3\") " pod="kube-system/kube-proxy-rj49w"
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.474475    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwfdv\" (UniqueName: \"kubernetes.io/projected/699ee90a-2fd5-4277-939c-75fb7aa461d3-kube-api-access-cwfdv\") pod \"kube-proxy-rj49w\" (UID: \"699ee90a-2fd5-4277-939c-75fb7aa461d3\") " pod="kube-system/kube-proxy-rj49w"
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.474499    1143 reconciler.go:159] "Reconciler: start to sync state"
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.490641    1143 scope.go:110] "RemoveContainer" containerID="a5457095213ccd2591eb7d1911d03fec3c84adf32c07bf53dec8c4a42c016d30"
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.834165    1143 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8bz8\" (UniqueName: \"kubernetes.io/projected/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b-kube-api-access-z8bz8\") pod \"a24d740b-e7a1-43af-a4ca-a55c3ec7b80b\" (UID: \"a24d740b-e7a1-43af-a4ca-a55c3ec7b80b\") "
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.834641    1143 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b-config-volume\") pod \"a24d740b-e7a1-43af-a4ca-a55c3ec7b80b\" (UID: \"a24d740b-e7a1-43af-a4ca-a55c3ec7b80b\") "
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: E0927 18:28:41.836036    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: E0927 18:28:41.836166    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume podName:b3c7f0fc-beb5-4776-9dca-71bda146b4c5 nodeName:}" failed. No retries permitted until 2024-09-27 18:28:42.336132103 +0000 UTC m=+27.163080589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume") pod "coredns-6d4b75cb6d-scjdv" (UID: "b3c7f0fc-beb5-4776-9dca-71bda146b4c5") : object "kube-system"/"coredns" not registered
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: W0927 18:28:41.836763    1143 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b/volumes/kubernetes.io~projected/kube-api-access-z8bz8: clearQuota called, but quotas disabled
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.837184    1143 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b-kube-api-access-z8bz8" (OuterVolumeSpecName: "kube-api-access-z8bz8") pod "a24d740b-e7a1-43af-a4ca-a55c3ec7b80b" (UID: "a24d740b-e7a1-43af-a4ca-a55c3ec7b80b"). InnerVolumeSpecName "kube-api-access-z8bz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: W0927 18:28:41.837412    1143 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.838013    1143 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a24d740b-e7a1-43af-a4ca-a55c3ec7b80b" (UID: "a24d740b-e7a1-43af-a4ca-a55c3ec7b80b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.935206    1143 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b-config-volume\") on node \"test-preload-384202\" DevicePath \"\""
	Sep 27 18:28:41 test-preload-384202 kubelet[1143]: I0927 18:28:41.935249    1143 reconciler.go:384] "Volume detached for volume \"kube-api-access-z8bz8\" (UniqueName: \"kubernetes.io/projected/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b-kube-api-access-z8bz8\") on node \"test-preload-384202\" DevicePath \"\""
	Sep 27 18:28:42 test-preload-384202 kubelet[1143]: E0927 18:28:42.337554    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 27 18:28:42 test-preload-384202 kubelet[1143]: E0927 18:28:42.337624    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume podName:b3c7f0fc-beb5-4776-9dca-71bda146b4c5 nodeName:}" failed. No retries permitted until 2024-09-27 18:28:43.337608503 +0000 UTC m=+28.164556996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume") pod "coredns-6d4b75cb6d-scjdv" (UID: "b3c7f0fc-beb5-4776-9dca-71bda146b4c5") : object "kube-system"/"coredns" not registered
	Sep 27 18:28:42 test-preload-384202 kubelet[1143]: E0927 18:28:42.397378    1143 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-scjdv" podUID=b3c7f0fc-beb5-4776-9dca-71bda146b4c5
	Sep 27 18:28:43 test-preload-384202 kubelet[1143]: E0927 18:28:43.344984    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 27 18:28:43 test-preload-384202 kubelet[1143]: E0927 18:28:43.345054    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume podName:b3c7f0fc-beb5-4776-9dca-71bda146b4c5 nodeName:}" failed. No retries permitted until 2024-09-27 18:28:45.345039646 +0000 UTC m=+30.171988126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume") pod "coredns-6d4b75cb6d-scjdv" (UID: "b3c7f0fc-beb5-4776-9dca-71bda146b4c5") : object "kube-system"/"coredns" not registered
	Sep 27 18:28:43 test-preload-384202 kubelet[1143]: I0927 18:28:43.402020    1143 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=a24d740b-e7a1-43af-a4ca-a55c3ec7b80b path="/var/lib/kubelet/pods/a24d740b-e7a1-43af-a4ca-a55c3ec7b80b/volumes"
	Sep 27 18:28:44 test-preload-384202 kubelet[1143]: E0927 18:28:44.397530    1143 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-scjdv" podUID=b3c7f0fc-beb5-4776-9dca-71bda146b4c5
	Sep 27 18:28:45 test-preload-384202 kubelet[1143]: E0927 18:28:45.360646    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 27 18:28:45 test-preload-384202 kubelet[1143]: E0927 18:28:45.360727    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume podName:b3c7f0fc-beb5-4776-9dca-71bda146b4c5 nodeName:}" failed. No retries permitted until 2024-09-27 18:28:49.360707079 +0000 UTC m=+34.187655572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b3c7f0fc-beb5-4776-9dca-71bda146b4c5-config-volume") pod "coredns-6d4b75cb6d-scjdv" (UID: "b3c7f0fc-beb5-4776-9dca-71bda146b4c5") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [47275518d6838aeab3d021fb05013c14a43719ab925fde0f51a0d8723ab96e83] <==
	I0927 18:28:42.431895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-384202 -n test-preload-384202
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-384202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-384202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-384202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-384202: (1.142314416s)
--- FAIL: TestPreload (190.46s)

                                                
                                    
TestKubernetesUpgrade (385.36s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m41.631287378s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-477684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-477684" primary control-plane node in "kubernetes-upgrade-477684" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
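The duplicated "Generating certificates and keys" / "Booting up control plane" lines in the stdout above suggest the v1.20.0 control plane was attempted twice before the start finally failed with exit status 109. A minimal sketch for reproducing the failing invocation, with the flags copied verbatim from the command recorded above, followed by cleanup:

	# Re-run the failing legacy-version start from this test; flags copied from the log above.
	out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 \
	  --memory=2200 \
	  --kubernetes-version=v1.20.0 \
	  --alsologtostderr -v=1 \
	  --driver=kvm2 \
	  --container-runtime=crio

	# Delete the profile afterwards so later runs can reuse the name.
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-477684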
** stderr ** 
	I0927 18:33:15.349670   59538 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:33:15.349810   59538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:33:15.349820   59538 out.go:358] Setting ErrFile to fd 2...
	I0927 18:33:15.349825   59538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:33:15.350023   59538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:33:15.350832   59538 out.go:352] Setting JSON to false
	I0927 18:33:15.352056   59538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8140,"bootTime":1727453855,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:33:15.352193   59538 start.go:139] virtualization: kvm guest
	I0927 18:33:15.354432   59538 out.go:177] * [kubernetes-upgrade-477684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:33:15.355792   59538 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:33:15.355797   59538 notify.go:220] Checking for updates...
	I0927 18:33:15.357329   59538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:33:15.358785   59538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:33:15.360585   59538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:33:15.362380   59538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:33:15.363897   59538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:33:15.365635   59538 config.go:182] Loaded profile config "NoKubernetes-634967": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0927 18:33:15.365769   59538 config.go:182] Loaded profile config "cert-expiration-784714": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:33:15.365870   59538 config.go:182] Loaded profile config "running-upgrade-158112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0927 18:33:15.365966   59538 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:33:15.401980   59538 out.go:177] * Using the kvm2 driver based on user configuration
	I0927 18:33:15.403415   59538 start.go:297] selected driver: kvm2
	I0927 18:33:15.403436   59538 start.go:901] validating driver "kvm2" against <nil>
	I0927 18:33:15.403452   59538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:33:15.404175   59538 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:33:15.404280   59538 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:33:15.420884   59538 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:33:15.420952   59538 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 18:33:15.421203   59538 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 18:33:15.421230   59538 cni.go:84] Creating CNI manager for ""
	I0927 18:33:15.421282   59538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:33:15.421293   59538 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 18:33:15.421379   59538 start.go:340] cluster config:
	{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:33:15.421521   59538 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:33:15.424224   59538 out.go:177] * Starting "kubernetes-upgrade-477684" primary control-plane node in "kubernetes-upgrade-477684" cluster
	I0927 18:33:15.425434   59538 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 18:33:15.425474   59538 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 18:33:15.425481   59538 cache.go:56] Caching tarball of preloaded images
	I0927 18:33:15.425569   59538 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:33:15.425578   59538 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0927 18:33:15.425666   59538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json ...
	I0927 18:33:15.425683   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json: {Name:mkee59e7c7924d100b98c0190cb031b4aa43312d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:15.425819   59538 start.go:360] acquireMachinesLock for kubernetes-upgrade-477684: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:33:22.932906   59538 start.go:364] duration metric: took 7.507063132s to acquireMachinesLock for "kubernetes-upgrade-477684"
	I0927 18:33:22.932972   59538 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 18:33:22.933095   59538 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 18:33:22.935322   59538 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0927 18:33:22.935506   59538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:33:22.935579   59538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:33:22.955293   59538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42763
	I0927 18:33:22.955895   59538 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:33:22.956574   59538 main.go:141] libmachine: Using API Version  1
	I0927 18:33:22.956590   59538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:33:22.957031   59538 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:33:22.957180   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:33:22.957303   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:22.957422   59538 start.go:159] libmachine.API.Create for "kubernetes-upgrade-477684" (driver="kvm2")
	I0927 18:33:22.957448   59538 client.go:168] LocalClient.Create starting
	I0927 18:33:22.957485   59538 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 18:33:22.957526   59538 main.go:141] libmachine: Decoding PEM data...
	I0927 18:33:22.957548   59538 main.go:141] libmachine: Parsing certificate...
	I0927 18:33:22.957602   59538 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 18:33:22.957622   59538 main.go:141] libmachine: Decoding PEM data...
	I0927 18:33:22.957636   59538 main.go:141] libmachine: Parsing certificate...
	I0927 18:33:22.957652   59538 main.go:141] libmachine: Running pre-create checks...
	I0927 18:33:22.957659   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .PreCreateCheck
	I0927 18:33:22.958010   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetConfigRaw
	I0927 18:33:22.958433   59538 main.go:141] libmachine: Creating machine...
	I0927 18:33:22.958447   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .Create
	I0927 18:33:22.958727   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Creating KVM machine...
	I0927 18:33:22.959866   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found existing default KVM network
	I0927 18:33:22.961093   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:22.960952   59606 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:cb:f6:c4} reservation:<nil>}
	I0927 18:33:22.962415   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:22.962175   59606 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ed0}
	I0927 18:33:22.962431   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | created network xml: 
	I0927 18:33:22.962443   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | <network>
	I0927 18:33:22.962452   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |   <name>mk-kubernetes-upgrade-477684</name>
	I0927 18:33:22.962463   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |   <dns enable='no'/>
	I0927 18:33:22.962470   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |   
	I0927 18:33:22.962481   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0927 18:33:22.962493   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |     <dhcp>
	I0927 18:33:22.962504   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0927 18:33:22.962512   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |     </dhcp>
	I0927 18:33:22.962520   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |   </ip>
	I0927 18:33:22.962538   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG |   
	I0927 18:33:22.962547   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | </network>
	I0927 18:33:22.962553   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | 
	I0927 18:33:22.968749   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | trying to create private KVM network mk-kubernetes-upgrade-477684 192.168.50.0/24...
	I0927 18:33:23.055426   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | private KVM network mk-kubernetes-upgrade-477684 192.168.50.0/24 created
	I0927 18:33:23.055463   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684 ...
	I0927 18:33:23.055477   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:23.055427   59606 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:33:23.055495   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 18:33:23.055611   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 18:33:23.387071   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:23.386934   59606 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa...
	I0927 18:33:23.488673   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:23.488541   59606 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/kubernetes-upgrade-477684.rawdisk...
	I0927 18:33:23.488700   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Writing magic tar header
	I0927 18:33:23.488712   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Writing SSH key tar header
	I0927 18:33:23.488720   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:23.488665   59606 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684 ...
	I0927 18:33:23.488801   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684
	I0927 18:33:23.488823   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 18:33:23.488854   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684 (perms=drwx------)
	I0927 18:33:23.488866   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:33:23.488874   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 18:33:23.488881   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 18:33:23.488893   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home/jenkins
	I0927 18:33:23.488903   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Checking permissions on dir: /home
	I0927 18:33:23.488915   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Skipping /home - not owner
	I0927 18:33:23.488930   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 18:33:23.488966   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 18:33:23.488990   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 18:33:23.489014   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 18:33:23.489027   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 18:33:23.489042   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Creating domain...
	I0927 18:33:23.490053   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) define libvirt domain using xml: 
	I0927 18:33:23.490076   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) <domain type='kvm'>
	I0927 18:33:23.490106   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <name>kubernetes-upgrade-477684</name>
	I0927 18:33:23.490128   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <memory unit='MiB'>2200</memory>
	I0927 18:33:23.490140   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <vcpu>2</vcpu>
	I0927 18:33:23.490150   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <features>
	I0927 18:33:23.490159   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <acpi/>
	I0927 18:33:23.490168   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <apic/>
	I0927 18:33:23.490185   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <pae/>
	I0927 18:33:23.490195   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     
	I0927 18:33:23.490204   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   </features>
	I0927 18:33:23.490218   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <cpu mode='host-passthrough'>
	I0927 18:33:23.490237   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   
	I0927 18:33:23.490247   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   </cpu>
	I0927 18:33:23.490257   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <os>
	I0927 18:33:23.490263   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <type>hvm</type>
	I0927 18:33:23.490293   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <boot dev='cdrom'/>
	I0927 18:33:23.490314   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <boot dev='hd'/>
	I0927 18:33:23.490325   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <bootmenu enable='no'/>
	I0927 18:33:23.490333   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   </os>
	I0927 18:33:23.490344   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   <devices>
	I0927 18:33:23.490355   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <disk type='file' device='cdrom'>
	I0927 18:33:23.490371   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/boot2docker.iso'/>
	I0927 18:33:23.490381   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <target dev='hdc' bus='scsi'/>
	I0927 18:33:23.490390   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <readonly/>
	I0927 18:33:23.490406   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </disk>
	I0927 18:33:23.490418   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <disk type='file' device='disk'>
	I0927 18:33:23.490431   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 18:33:23.490456   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/kubernetes-upgrade-477684.rawdisk'/>
	I0927 18:33:23.490466   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <target dev='hda' bus='virtio'/>
	I0927 18:33:23.490496   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </disk>
	I0927 18:33:23.490521   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <interface type='network'>
	I0927 18:33:23.490552   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <source network='mk-kubernetes-upgrade-477684'/>
	I0927 18:33:23.490567   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <model type='virtio'/>
	I0927 18:33:23.490578   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </interface>
	I0927 18:33:23.490594   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <interface type='network'>
	I0927 18:33:23.490618   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <source network='default'/>
	I0927 18:33:23.490634   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <model type='virtio'/>
	I0927 18:33:23.490680   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </interface>
	I0927 18:33:23.490702   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <serial type='pty'>
	I0927 18:33:23.490716   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <target port='0'/>
	I0927 18:33:23.490728   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </serial>
	I0927 18:33:23.490741   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <console type='pty'>
	I0927 18:33:23.490753   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <target type='serial' port='0'/>
	I0927 18:33:23.490766   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </console>
	I0927 18:33:23.490777   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     <rng model='virtio'>
	I0927 18:33:23.490791   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)       <backend model='random'>/dev/random</backend>
	I0927 18:33:23.490806   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     </rng>
	I0927 18:33:23.490836   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     
	I0927 18:33:23.490854   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)     
	I0927 18:33:23.490866   59538 main.go:141] libmachine: (kubernetes-upgrade-477684)   </devices>
	I0927 18:33:23.490875   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) </domain>
	I0927 18:33:23.490889   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) 
	I0927 18:33:23.495099   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:ab:1a:f9 in network default
	I0927 18:33:23.495665   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring networks are active...
	I0927 18:33:23.495682   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:23.496490   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring network default is active
	I0927 18:33:23.496792   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring network mk-kubernetes-upgrade-477684 is active
	I0927 18:33:23.497330   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Getting domain xml...
	I0927 18:33:23.498221   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Creating domain...
	I0927 18:33:24.931202   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Waiting to get IP...
	I0927 18:33:24.932123   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:24.932648   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:24.932690   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:24.932631   59606 retry.go:31] will retry after 272.768392ms: waiting for machine to come up
	I0927 18:33:25.207106   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:25.207653   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:25.207675   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:25.207609   59606 retry.go:31] will retry after 368.123282ms: waiting for machine to come up
	I0927 18:33:25.577206   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:25.577867   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:25.577979   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:25.577921   59606 retry.go:31] will retry after 347.047982ms: waiting for machine to come up
	I0927 18:33:25.926882   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:25.927388   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:25.927412   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:25.927344   59606 retry.go:31] will retry after 608.789119ms: waiting for machine to come up
	I0927 18:33:26.768598   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:26.769080   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:26.769146   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:26.769086   59606 retry.go:31] will retry after 578.789144ms: waiting for machine to come up
	I0927 18:33:27.349945   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:27.350514   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:27.350549   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:27.350455   59606 retry.go:31] will retry after 573.551814ms: waiting for machine to come up
	I0927 18:33:27.925160   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:27.925701   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:27.925724   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:27.925653   59606 retry.go:31] will retry after 969.92792ms: waiting for machine to come up
	I0927 18:33:28.898444   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:28.898835   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:28.898871   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:28.898772   59606 retry.go:31] will retry after 1.020609132s: waiting for machine to come up
	I0927 18:33:29.920684   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:29.921181   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:29.921206   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:29.921151   59606 retry.go:31] will retry after 1.545863763s: waiting for machine to come up
	I0927 18:33:31.468950   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:31.469478   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:31.469500   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:31.469439   59606 retry.go:31] will retry after 1.792330236s: waiting for machine to come up
	I0927 18:33:33.263311   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:33.263891   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:33.263944   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:33.263863   59606 retry.go:31] will retry after 2.554416202s: waiting for machine to come up
	I0927 18:33:35.819949   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:35.820453   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:35.820492   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:35.820389   59606 retry.go:31] will retry after 3.590567473s: waiting for machine to come up
	I0927 18:33:39.413400   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:39.413874   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:39.413896   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:39.413836   59606 retry.go:31] will retry after 2.959641306s: waiting for machine to come up
	I0927 18:33:42.375854   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:42.376482   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:33:42.376512   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:33:42.376421   59606 retry.go:31] will retry after 3.878479262s: waiting for machine to come up
	I0927 18:33:46.256059   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:46.256633   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Found IP for machine: 192.168.50.36
	I0927 18:33:46.256659   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Reserving static IP address...
	I0927 18:33:46.256675   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has current primary IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:46.256994   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-477684", mac: "52:54:00:3f:58:c1", ip: "192.168.50.36"} in network mk-kubernetes-upgrade-477684
	I0927 18:33:46.339101   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Getting to WaitForSSH function...
	I0927 18:33:46.339129   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Reserved static IP address: 192.168.50.36
	I0927 18:33:46.339192   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Waiting for SSH to be available...
	I0927 18:33:46.342076   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:46.342509   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684
	I0927 18:33:46.342536   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-477684 interface with MAC address 52:54:00:3f:58:c1
	I0927 18:33:46.342704   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using SSH client type: external
	I0927 18:33:46.342735   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa (-rw-------)
	I0927 18:33:46.342764   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:33:46.342787   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | About to run SSH command:
	I0927 18:33:46.342810   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | exit 0
	I0927 18:33:46.346601   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | SSH cmd err, output: exit status 255: 
	I0927 18:33:46.346619   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 18:33:46.346627   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | command : exit 0
	I0927 18:33:46.346635   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | err     : exit status 255
	I0927 18:33:46.346655   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | output  : 
	I0927 18:33:49.347819   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Getting to WaitForSSH function...
	I0927 18:33:49.350461   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.350807   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.350837   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.350978   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using SSH client type: external
	I0927 18:33:49.350998   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa (-rw-------)
	I0927 18:33:49.351016   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:33:49.351047   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | About to run SSH command:
	I0927 18:33:49.351058   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | exit 0
	I0927 18:33:49.478914   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | SSH cmd err, output: <nil>: 
	I0927 18:33:49.479144   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) KVM machine creation complete!
	I0927 18:33:49.479440   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetConfigRaw
	I0927 18:33:49.479988   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:49.480210   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:49.480497   59538 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 18:33:49.480515   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetState
	I0927 18:33:49.482043   59538 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 18:33:49.482056   59538 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 18:33:49.482061   59538 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 18:33:49.482066   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:49.484762   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.485100   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.485127   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.485327   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:49.485517   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.485662   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.485802   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:49.485987   59538 main.go:141] libmachine: Using SSH client type: native
	I0927 18:33:49.486188   59538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:33:49.486207   59538 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 18:33:49.597937   59538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:33:49.597957   59538 main.go:141] libmachine: Detecting the provisioner...
	I0927 18:33:49.597964   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:49.600707   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.601129   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.601147   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.601427   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:49.601652   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.601809   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.601944   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:49.602096   59538 main.go:141] libmachine: Using SSH client type: native
	I0927 18:33:49.602317   59538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:33:49.602330   59538 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 18:33:49.715292   59538 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 18:33:49.715375   59538 main.go:141] libmachine: found compatible host: buildroot
	I0927 18:33:49.715399   59538 main.go:141] libmachine: Provisioning with buildroot...
	I0927 18:33:49.715410   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:33:49.715669   59538 buildroot.go:166] provisioning hostname "kubernetes-upgrade-477684"
	I0927 18:33:49.715699   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:33:49.715865   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:49.718551   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.718906   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.718933   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.719151   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:49.719349   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.719484   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.719602   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:49.719749   59538 main.go:141] libmachine: Using SSH client type: native
	I0927 18:33:49.719926   59538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:33:49.719938   59538 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-477684 && echo "kubernetes-upgrade-477684" | sudo tee /etc/hostname
	I0927 18:33:49.848226   59538 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-477684
	
	I0927 18:33:49.848280   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:49.851267   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.851630   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.851669   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.851847   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:49.852007   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.852148   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:49.852330   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:49.852544   59538 main.go:141] libmachine: Using SSH client type: native
	I0927 18:33:49.852762   59538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:33:49.852791   59538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-477684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-477684/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-477684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:33:49.977031   59538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:33:49.977065   59538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:33:49.977094   59538 buildroot.go:174] setting up certificates
	I0927 18:33:49.977103   59538 provision.go:84] configureAuth start
	I0927 18:33:49.977113   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:33:49.977381   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:33:49.980245   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.980695   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.980725   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.980939   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:49.983173   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.983475   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:49.983499   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:49.983695   59538 provision.go:143] copyHostCerts
	I0927 18:33:49.983750   59538 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:33:49.983763   59538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:33:49.983831   59538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:33:49.983943   59538 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:33:49.983955   59538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:33:49.983984   59538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:33:49.984066   59538 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:33:49.984076   59538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:33:49.984102   59538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:33:49.984164   59538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-477684 san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-477684 localhost minikube]
	I0927 18:33:50.068452   59538 provision.go:177] copyRemoteCerts
	I0927 18:33:50.068514   59538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:33:50.068536   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:50.071264   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.071604   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.071632   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.071813   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:50.071999   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.072160   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:50.072290   59538 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:33:50.160761   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:33:50.183416   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0927 18:33:50.205493   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
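	For reference, the server certificate copied above was generated a few lines earlier with san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-477684 localhost minikube]. A minimal way to confirm what actually landed in the guest, assuming a shell inside the VM (for example via minikube ssh) and the stock openssl client; this check is illustrative and not part of the captured run:

	# Inspect the provisioned server cert and its Subject Alternative Names
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'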
	I0927 18:33:50.228311   59538 provision.go:87] duration metric: took 251.197155ms to configureAuth
	I0927 18:33:50.228339   59538 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:33:50.228488   59538 config.go:182] Loaded profile config "kubernetes-upgrade-477684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 18:33:50.228559   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:50.231103   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.231440   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.231471   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.231650   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:50.231824   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.231975   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.232093   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:50.232222   59538 main.go:141] libmachine: Using SSH client type: native
	I0927 18:33:50.232384   59538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:33:50.232402   59538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:33:50.454339   59538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
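	The SSH command above writes a one-line environment drop-in intended for the crio unit on the minikube guest, then restarts the runtime. A quick sanity check from inside the guest (illustrative only, not something the test itself performs):

	# Confirm the drop-in landed and CRI-O came back up
	cat /etc/sysconfig/crio.minikube        # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio                # expect: active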
	I0927 18:33:50.454364   59538 main.go:141] libmachine: Checking connection to Docker...
	I0927 18:33:50.454372   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetURL
	I0927 18:33:50.455618   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using libvirt version 6000000
	I0927 18:33:50.457803   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.458065   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.458090   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.458290   59538 main.go:141] libmachine: Docker is up and running!
	I0927 18:33:50.458302   59538 main.go:141] libmachine: Reticulating splines...
	I0927 18:33:50.458308   59538 client.go:171] duration metric: took 27.500854661s to LocalClient.Create
	I0927 18:33:50.458329   59538 start.go:167] duration metric: took 27.500906226s to libmachine.API.Create "kubernetes-upgrade-477684"
	I0927 18:33:50.458339   59538 start.go:293] postStartSetup for "kubernetes-upgrade-477684" (driver="kvm2")
	I0927 18:33:50.458355   59538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:33:50.458374   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:50.458599   59538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:33:50.458631   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:50.461056   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.461385   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.461416   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.461506   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:50.461705   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.461823   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:50.461968   59538 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:33:50.552834   59538 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:33:50.556951   59538 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:33:50.556979   59538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:33:50.557053   59538 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:33:50.557161   59538 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:33:50.557250   59538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:33:50.566240   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:33:50.589814   59538 start.go:296] duration metric: took 131.45696ms for postStartSetup
	I0927 18:33:50.589958   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetConfigRaw
	I0927 18:33:50.590695   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:33:50.593143   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.593482   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.593510   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.593766   59538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json ...
	I0927 18:33:50.593980   59538 start.go:128] duration metric: took 27.660865031s to createHost
	I0927 18:33:50.594012   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:50.596294   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.596632   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.596670   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.596764   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:50.596928   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.597046   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.597169   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:50.597397   59538 main.go:141] libmachine: Using SSH client type: native
	I0927 18:33:50.597561   59538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:33:50.597570   59538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:33:50.711857   59538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727462030.690768301
	
	I0927 18:33:50.711881   59538 fix.go:216] guest clock: 1727462030.690768301
	I0927 18:33:50.711888   59538 fix.go:229] Guest: 2024-09-27 18:33:50.690768301 +0000 UTC Remote: 2024-09-27 18:33:50.593997584 +0000 UTC m=+35.283159863 (delta=96.770717ms)
	I0927 18:33:50.711934   59538 fix.go:200] guest clock delta is within tolerance: 96.770717ms
	I0927 18:33:50.711945   59538 start.go:83] releasing machines lock for "kubernetes-upgrade-477684", held for 27.779008651s
	I0927 18:33:50.711971   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:50.712267   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:33:50.715462   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.715876   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.715920   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.716117   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:50.716587   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:50.716784   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:33:50.716895   59538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:33:50.716938   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:50.717004   59538 ssh_runner.go:195] Run: cat /version.json
	I0927 18:33:50.717029   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:33:50.719620   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.719961   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.719988   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.720013   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.720212   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:50.720404   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.720566   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:50.720640   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:50.720678   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:50.720700   59538 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:33:50.720963   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:33:50.721170   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:33:50.721296   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:33:50.721458   59538 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:33:50.840594   59538 ssh_runner.go:195] Run: systemctl --version
	I0927 18:33:50.846746   59538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:33:51.014380   59538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 18:33:51.020555   59538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:33:51.020642   59538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:33:51.044016   59538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 18:33:51.044040   59538 start.go:495] detecting cgroup driver to use...
	I0927 18:33:51.044125   59538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:33:51.062379   59538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:33:51.079977   59538 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:33:51.080033   59538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:33:51.095203   59538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:33:51.109781   59538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:33:51.238073   59538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:33:51.380195   59538 docker.go:233] disabling docker service ...
	I0927 18:33:51.380265   59538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:33:51.394814   59538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:33:51.407687   59538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:33:51.556907   59538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:33:51.689654   59538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:33:51.705817   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:33:51.725896   59538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0927 18:33:51.725958   59538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:33:51.736272   59538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:33:51.736347   59538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:33:51.746944   59538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:33:51.757581   59538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:33:51.771770   59538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:33:51.786113   59538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:33:51.796455   59538 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 18:33:51.796514   59538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 18:33:51.819026   59538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:33:51.834415   59538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:33:51.950964   59538 ssh_runner.go:195] Run: sudo systemctl restart crio
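	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the pause image and cgroup driver expected for this Kubernetes version, then reloads and restarts the runtime. Condensed into a by-hand form (values and paths taken directly from the logged commands; run inside the guest, illustrative only):

	# Point CRI-O at the v1.20-era pause image and the cgroupfs driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo modprobe br_netfilter              # needed here because the bridge-nf sysctl was initially missing
	sudo systemctl daemon-reload && sudo systemctl restart crio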
	I0927 18:33:52.044670   59538 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:33:52.044747   59538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:33:52.049315   59538 start.go:563] Will wait 60s for crictl version
	I0927 18:33:52.049379   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:52.052884   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:33:52.090426   59538 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:33:52.090507   59538 ssh_runner.go:195] Run: crio --version
	I0927 18:33:52.118525   59538 ssh_runner.go:195] Run: crio --version
	I0927 18:33:52.150104   59538 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0927 18:33:52.151420   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:33:52.157651   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:52.158170   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:33:38 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:33:52.158206   59538 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:33:52.158455   59538 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 18:33:52.162851   59538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:33:52.175307   59538 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:33:52.175442   59538 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 18:33:52.175498   59538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:33:52.209402   59538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 18:33:52.209495   59538 ssh_runner.go:195] Run: which lz4
	I0927 18:33:52.213511   59538 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 18:33:52.218053   59538 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 18:33:52.218090   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0927 18:33:53.798672   59538 crio.go:462] duration metric: took 1.585184681s to copy over tarball
	I0927 18:33:53.798786   59538 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 18:33:56.694991   59538 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.896165852s)
	I0927 18:33:56.695027   59538 crio.go:469] duration metric: took 2.896302603s to extract the tarball
	I0927 18:33:56.695037   59538 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 18:33:56.745035   59538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:33:56.812265   59538 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0927 18:33:56.812296   59538 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0927 18:33:56.812397   59538 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:33:56.812428   59538 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:56.812453   59538 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0927 18:33:56.812408   59538 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:56.812452   59538 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:56.812406   59538 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:56.812509   59538 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0927 18:33:56.812519   59538 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:56.814123   59538 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:56.814133   59538 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:56.814160   59538 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:56.814177   59538 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0927 18:33:56.814126   59538 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:56.814133   59538 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0927 18:33:56.814132   59538 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:56.814166   59538 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:33:57.024157   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0927 18:33:57.033642   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:57.075411   59538 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0927 18:33:57.075456   59538 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0927 18:33:57.075523   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.096796   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 18:33:57.096901   59538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0927 18:33:57.096941   59538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:57.096985   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.100739   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:57.104077   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0927 18:33:57.129716   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:57.134018   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:57.135115   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:57.137801   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:57.174592   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:57.174629   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 18:33:57.265418   59538 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0927 18:33:57.265466   59538 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0927 18:33:57.265515   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.315939   59538 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0927 18:33:57.315985   59538 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:57.316032   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.344137   59538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0927 18:33:57.344176   59538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:57.344203   59538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0927 18:33:57.344223   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.344242   59538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:57.344264   59538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0927 18:33:57.344284   59538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:57.344300   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.344337   59538 ssh_runner.go:195] Run: which crictl
	I0927 18:33:57.344363   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0927 18:33:57.344457   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0927 18:33:57.344493   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:57.344460   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 18:33:57.363325   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:57.363399   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:57.363725   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:57.474286   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0927 18:33:57.483648   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0927 18:33:57.483692   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 18:33:57.483747   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:57.506426   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:57.506454   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:57.506533   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:57.586030   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0927 18:33:57.586064   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0927 18:33:57.618083   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0927 18:33:57.618138   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0927 18:33:57.618092   59538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0927 18:33:57.690214   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0927 18:33:57.727180   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0927 18:33:57.727883   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0927 18:33:57.727927   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0927 18:33:57.733309   59538 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0927 18:33:58.113716   59538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:33:58.254187   59538 cache_images.go:92] duration metric: took 1.441871962s to LoadCachedImages
	W0927 18:33:58.254320   59538 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19712-11184/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0927 18:33:58.254341   59538 kubeadm.go:934] updating node { 192.168.50.36 8443 v1.20.0 crio true true} ...
	I0927 18:33:58.254468   59538 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-477684 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 18:33:58.254580   59538 ssh_runner.go:195] Run: crio config
	I0927 18:33:58.302916   59538 cni.go:84] Creating CNI manager for ""
	I0927 18:33:58.302944   59538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:33:58.302959   59538 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:33:58.302991   59538 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-477684 NodeName:kubernetes-upgrade-477684 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0927 18:33:58.303159   59538 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-477684"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:33:58.303235   59538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0927 18:33:58.314111   59538 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:33:58.314184   59538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:33:58.324693   59538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0927 18:33:58.341608   59538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:33:58.359104   59538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
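	At this point the kubeadm config shown above has been written to /var/tmp/minikube/kubeadm.yaml.new and the kubelet drop-in to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged way to inspect and dry-run what was generated, without touching cluster state (illustrative; kubeadm's --dry-run still performs preflight checks, so the same --ignore-preflight-errors list used at the end of this excerpt may be needed on a 2-CPU guest):

	# Inspect the rendered kubelet unit together with its drop-in
	systemctl cat kubelet
	# Dry-run the generated kubeadm config with the pinned v1.20.0 binary
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run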
	I0927 18:33:58.378978   59538 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0927 18:33:58.382774   59538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.36	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:33:58.395374   59538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:33:58.528369   59538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:33:58.549607   59538 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684 for IP: 192.168.50.36
	I0927 18:33:58.549647   59538 certs.go:194] generating shared ca certs ...
	I0927 18:33:58.549672   59538 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:58.549882   59538 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:33:58.549950   59538 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:33:58.549965   59538 certs.go:256] generating profile certs ...
	I0927 18:33:58.550030   59538 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.key
	I0927 18:33:58.550061   59538 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.crt with IP's: []
	I0927 18:33:58.672301   59538 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.crt ...
	I0927 18:33:58.672345   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.crt: {Name:mk5204a0e5b4937ef195cde44b8e1ca0a177692c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:58.672555   59538 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.key ...
	I0927 18:33:58.672577   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.key: {Name:mk229208cc216b68cf2fea085820c53561790617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:58.672688   59538 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key.e0436798
	I0927 18:33:58.672710   59538 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt.e0436798 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.36]
	I0927 18:33:58.936596   59538 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt.e0436798 ...
	I0927 18:33:58.936626   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt.e0436798: {Name:mked36441d9b4ba5e2924da1ace2001a9fce1e6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:58.956078   59538 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key.e0436798 ...
	I0927 18:33:58.956118   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key.e0436798: {Name:mkb04d80b0ece00e76f789f7dba567091722e801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:58.956268   59538 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt.e0436798 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt
	I0927 18:33:58.956416   59538 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key.e0436798 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key
	I0927 18:33:58.956500   59538 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.key
	I0927 18:33:58.956528   59538 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.crt with IP's: []
	I0927 18:33:59.070359   59538 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.crt ...
	I0927 18:33:59.070386   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.crt: {Name:mk7fd6675c9824c9df77b1e21abf5cf52c51745d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:59.070553   59538 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.key ...
	I0927 18:33:59.070568   59538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.key: {Name:mk69b88c739649321b7b5bf4350d30834c39468d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:33:59.070800   59538 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:33:59.070842   59538 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:33:59.070853   59538 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:33:59.070877   59538 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:33:59.070903   59538 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:33:59.070923   59538 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:33:59.070959   59538 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:33:59.071609   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:33:59.102759   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:33:59.127446   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:33:59.155101   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:33:59.182773   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 18:33:59.212647   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 18:33:59.238555   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:33:59.265264   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 18:33:59.290698   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:33:59.316237   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:33:59.341803   59538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:33:59.368481   59538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:33:59.395419   59538 ssh_runner.go:195] Run: openssl version
	I0927 18:33:59.401644   59538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:33:59.428654   59538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:33:59.437095   59538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:33:59.437177   59538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:33:59.448968   59538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:33:59.464685   59538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:33:59.475796   59538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:33:59.480651   59538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:33:59.480732   59538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:33:59.486362   59538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:33:59.497459   59538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:33:59.507635   59538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:33:59.512028   59538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:33:59.512090   59538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:33:59.517629   59538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
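	The .0 symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding certificates, which is exactly what the preceding openssl x509 -hash -noout invocations compute. A small sketch of the same pattern for one of the certs, assuming a shell inside the guest (illustrative only):

	# Derive the hash-based name OpenSSL's trust-store lookup expects and link it
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem)   # 3ec20f2e in this run
	sudo ln -fs /etc/ssl/certs/183682.pem "/etc/ssl/certs/${hash}.0"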
	I0927 18:33:59.528271   59538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:33:59.532174   59538 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 18:33:59.532242   59538 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:33:59.532345   59538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:33:59.532414   59538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:33:59.572499   59538 cri.go:89] found id: ""
	I0927 18:33:59.572577   59538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 18:33:59.582920   59538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 18:33:59.592808   59538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 18:33:59.604464   59538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 18:33:59.604487   59538 kubeadm.go:157] found existing configuration files:
	
	I0927 18:33:59.604534   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 18:33:59.615130   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 18:33:59.615202   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 18:33:59.625290   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 18:33:59.634416   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 18:33:59.634492   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 18:33:59.644170   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 18:33:59.653658   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 18:33:59.653715   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 18:33:59.662899   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 18:33:59.672143   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 18:33:59.672260   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 18:33:59.682362   59538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 18:33:59.802863   59538 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 18:33:59.802917   59538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 18:33:59.950512   59538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 18:33:59.950700   59538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 18:33:59.950856   59538 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 18:34:00.125583   59538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 18:34:00.128194   59538 out.go:235]   - Generating certificates and keys ...
	I0927 18:34:00.128285   59538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 18:34:00.128390   59538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 18:34:00.412856   59538 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 18:34:00.630731   59538 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 18:34:00.851865   59538 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 18:34:01.000871   59538 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 18:34:01.397501   59538 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 18:34:01.397827   59538 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-477684 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	I0927 18:34:01.740643   59538 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 18:34:01.740884   59538 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-477684 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	I0927 18:34:01.944344   59538 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 18:34:02.094374   59538 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 18:34:02.296902   59538 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 18:34:02.297039   59538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 18:34:02.875241   59538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 18:34:03.063170   59538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 18:34:03.432565   59538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 18:34:03.742442   59538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 18:34:03.759737   59538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 18:34:03.760892   59538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 18:34:03.760956   59538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 18:34:03.886949   59538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 18:34:03.889158   59538 out.go:235]   - Booting up control plane ...
	I0927 18:34:03.889276   59538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 18:34:03.893663   59538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 18:34:03.896954   59538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 18:34:03.897059   59538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 18:34:03.902837   59538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 18:34:43.898775   59538 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 18:34:43.899563   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:34:43.899791   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:34:48.899798   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:34:48.900099   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:34:58.899607   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:34:58.899939   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:35:18.899867   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:35:18.900097   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:35:58.902617   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:35:58.902895   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:35:58.902911   59538 kubeadm.go:310] 
	I0927 18:35:58.902964   59538 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 18:35:58.903018   59538 kubeadm.go:310] 		timed out waiting for the condition
	I0927 18:35:58.903028   59538 kubeadm.go:310] 
	I0927 18:35:58.903074   59538 kubeadm.go:310] 	This error is likely caused by:
	I0927 18:35:58.903119   59538 kubeadm.go:310] 		- The kubelet is not running
	I0927 18:35:58.903260   59538 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 18:35:58.903270   59538 kubeadm.go:310] 
	I0927 18:35:58.903403   59538 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 18:35:58.903450   59538 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 18:35:58.903500   59538 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 18:35:58.903507   59538 kubeadm.go:310] 
	I0927 18:35:58.903648   59538 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 18:35:58.903755   59538 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 18:35:58.903762   59538 kubeadm.go:310] 
	I0927 18:35:58.903907   59538 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 18:35:58.904020   59538 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 18:35:58.904117   59538 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 18:35:58.904210   59538 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 18:35:58.904217   59538 kubeadm.go:310] 
	I0927 18:35:58.904709   59538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 18:35:58.904841   59538 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 18:35:58.904967   59538 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0927 18:35:58.905128   59538 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-477684 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-477684 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-477684 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-477684 localhost] and IPs [192.168.50.36 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0927 18:35:58.905177   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0927 18:35:59.525426   59538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:35:59.540961   59538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 18:35:59.550809   59538 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 18:35:59.550836   59538 kubeadm.go:157] found existing configuration files:
	
	I0927 18:35:59.550892   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 18:35:59.561307   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 18:35:59.561372   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 18:35:59.573071   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 18:35:59.583486   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 18:35:59.583557   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 18:35:59.593952   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 18:35:59.603759   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 18:35:59.603830   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 18:35:59.617092   59538 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 18:35:59.627361   59538 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 18:35:59.627434   59538 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 18:35:59.638074   59538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 18:35:59.733156   59538 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0927 18:35:59.733482   59538 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 18:35:59.932129   59538 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 18:35:59.932326   59538 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 18:35:59.932546   59538 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0927 18:36:00.182707   59538 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 18:36:00.282615   59538 out.go:235]   - Generating certificates and keys ...
	I0927 18:36:00.282825   59538 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 18:36:00.282943   59538 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 18:36:00.283087   59538 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0927 18:36:00.283175   59538 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0927 18:36:00.283278   59538 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0927 18:36:00.283366   59538 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0927 18:36:00.283462   59538 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0927 18:36:00.283557   59538 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0927 18:36:00.283709   59538 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0927 18:36:00.283802   59538 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0927 18:36:00.283868   59538 kubeadm.go:310] [certs] Using the existing "sa" key
	I0927 18:36:00.283944   59538 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 18:36:00.563990   59538 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 18:36:00.819785   59538 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 18:36:01.014934   59538 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 18:36:01.063342   59538 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 18:36:01.081158   59538 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 18:36:01.081321   59538 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 18:36:01.081422   59538 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 18:36:01.244759   59538 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 18:36:01.487678   59538 out.go:235]   - Booting up control plane ...
	I0927 18:36:01.487825   59538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 18:36:01.487941   59538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 18:36:01.488034   59538 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 18:36:01.488143   59538 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 18:36:01.488473   59538 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0927 18:36:41.266210   59538 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0927 18:36:41.266613   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:36:41.267036   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:36:46.267865   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:36:46.268148   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:36:56.268232   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:36:56.268528   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:37:16.267494   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:37:16.267682   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:37:56.267224   59538 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0927 18:37:56.267501   59538 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0927 18:37:56.267531   59538 kubeadm.go:310] 
	I0927 18:37:56.267602   59538 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0927 18:37:56.267655   59538 kubeadm.go:310] 		timed out waiting for the condition
	I0927 18:37:56.267668   59538 kubeadm.go:310] 
	I0927 18:37:56.267713   59538 kubeadm.go:310] 	This error is likely caused by:
	I0927 18:37:56.267763   59538 kubeadm.go:310] 		- The kubelet is not running
	I0927 18:37:56.267907   59538 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0927 18:37:56.267919   59538 kubeadm.go:310] 
	I0927 18:37:56.268057   59538 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0927 18:37:56.268112   59538 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0927 18:37:56.268154   59538 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0927 18:37:56.268161   59538 kubeadm.go:310] 
	I0927 18:37:56.268262   59538 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0927 18:37:56.268389   59538 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0927 18:37:56.268412   59538 kubeadm.go:310] 
	I0927 18:37:56.268551   59538 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0927 18:37:56.268690   59538 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0927 18:37:56.268801   59538 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0927 18:37:56.268914   59538 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0927 18:37:56.268935   59538 kubeadm.go:310] 
	I0927 18:37:56.269772   59538 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 18:37:56.269873   59538 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0927 18:37:56.269957   59538 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0927 18:37:56.270056   59538 kubeadm.go:394] duration metric: took 3m56.737816605s to StartCluster
	I0927 18:37:56.270144   59538 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0927 18:37:56.270274   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0927 18:37:56.322915   59538 cri.go:89] found id: ""
	I0927 18:37:56.322952   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.322964   59538 logs.go:278] No container was found matching "kube-apiserver"
	I0927 18:37:56.322974   59538 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0927 18:37:56.323052   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0927 18:37:56.363183   59538 cri.go:89] found id: ""
	I0927 18:37:56.363213   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.363224   59538 logs.go:278] No container was found matching "etcd"
	I0927 18:37:56.363232   59538 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0927 18:37:56.363306   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0927 18:37:56.401594   59538 cri.go:89] found id: ""
	I0927 18:37:56.401626   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.401637   59538 logs.go:278] No container was found matching "coredns"
	I0927 18:37:56.401652   59538 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0927 18:37:56.401720   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0927 18:37:56.436705   59538 cri.go:89] found id: ""
	I0927 18:37:56.436730   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.436738   59538 logs.go:278] No container was found matching "kube-scheduler"
	I0927 18:37:56.436743   59538 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0927 18:37:56.436797   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0927 18:37:56.471887   59538 cri.go:89] found id: ""
	I0927 18:37:56.471919   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.471930   59538 logs.go:278] No container was found matching "kube-proxy"
	I0927 18:37:56.471938   59538 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0927 18:37:56.472004   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0927 18:37:56.507966   59538 cri.go:89] found id: ""
	I0927 18:37:56.507997   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.508007   59538 logs.go:278] No container was found matching "kube-controller-manager"
	I0927 18:37:56.508015   59538 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0927 18:37:56.508081   59538 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0927 18:37:56.542707   59538 cri.go:89] found id: ""
	I0927 18:37:56.542740   59538 logs.go:276] 0 containers: []
	W0927 18:37:56.542752   59538 logs.go:278] No container was found matching "kindnet"
	I0927 18:37:56.542764   59538 logs.go:123] Gathering logs for kubelet ...
	I0927 18:37:56.542779   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0927 18:37:56.596363   59538 logs.go:123] Gathering logs for dmesg ...
	I0927 18:37:56.596398   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0927 18:37:56.611427   59538 logs.go:123] Gathering logs for describe nodes ...
	I0927 18:37:56.611461   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0927 18:37:56.748581   59538 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0927 18:37:56.748612   59538 logs.go:123] Gathering logs for CRI-O ...
	I0927 18:37:56.748629   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0927 18:37:56.864936   59538 logs.go:123] Gathering logs for container status ...
	I0927 18:37:56.864974   59538 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0927 18:37:56.924314   59538 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0927 18:37:56.924384   59538 out.go:270] * 
	* 
	W0927 18:37:56.924446   59538 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 18:37:56.924465   59538 out.go:270] * 
	* 
	W0927 18:37:56.925758   59538 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0927 18:37:56.928956   59538 out.go:201] 
	W0927 18:37:56.930419   59538 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0927 18:37:56.930475   59538 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0927 18:37:56.930510   59538 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0927 18:37:56.932071   59538 out.go:201] 

                                                
                                                
** /stderr **
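For reference, the retry suggested in the output above (adding --extra-config=kubelet.cgroup-driver=systemd) could be driven from Go roughly as follows. This is a hypothetical helper, not part of the test suite; the binary path, profile name, and flags are simply copied from the log:

// Hypothetical sketch: re-run the failed start with the cgroup-driver
// override suggested in the failure output above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-477684",
		"--memory=2200",
		"--kubernetes-version=v1.20.0",
		"--extra-config=kubelet.cgroup-driver=systemd", // suggestion from the failure output
		"--driver=kvm2",
		"--container-runtime=crio",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}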
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-477684
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-477684: (1.421729827s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-477684 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-477684 status --format={{.Host}}: exit status 7 (75.410445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
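The "may be ok" note reflects that a non-zero exit from "minikube status" is expected at this point: the stdout above reports the host as Stopped. A hedged sketch of that check (again not the suite's actual helper; the binary path and profile name come from the log):

// Sketch of why "exit status 7 (may be ok)" is tolerated above: run
// "minikube status" and record the exit code instead of failing, since a
// stopped host reports a non-zero status by design.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "kubernetes-upgrade-477684",
		"status", "--format={{.Host}}").CombinedOutput()
	state := string(out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the log above, exit status 7 accompanied a "Stopped" host,
		// which the test records as "may be ok" rather than a failure.
		fmt.Printf("status %q, exit code %d\n", state, exitErr.ExitCode())
		return
	}
	fmt.Printf("status %q\n", state)
}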
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.087237989s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-477684 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.548942ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-477684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-477684
	    minikube start -p kubernetes-upgrade-477684 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4776842 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-477684 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
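The K8S_DOWNGRADE_UNSUPPORTED exit above is the expected guard: the requested version (v1.20.0) is older than the version already deployed in the profile (v1.31.1). A minimal sketch of such a guard, assuming a semver comparison via golang.org/x/mod/semver and not reflecting minikube's actual implementation:

// Minimal sketch of a downgrade guard like the one exercised above: refuse
// to move an existing profile to an older Kubernetes version.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkVersion(existing, requested, profile string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf(
			"unable to safely downgrade existing Kubernetes %s cluster to %s; "+
				"delete the profile (minikube delete -p %s) or keep %s",
			existing, requested, profile, existing)
	}
	return nil
}

func main() {
	// Versions and profile name taken from the log above.
	if err := checkVersion("v1.31.1", "v1.20.0", "kubernetes-upgrade-477684"); err != nil {
		fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
	}
}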
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-477684 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.210508532s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-27 18:39:36.936743203 +0000 UTC m=+6225.151943788
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-477684 -n kubernetes-upgrade-477684
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-477684 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-477684 logs -n 25: (1.785590572s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-268892 sudo journalctl                       | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | -xeu kubelet --all --full                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo docker                           | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo                                  | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo cat                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo containerd                       | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo systemctl                        | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo find                             | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 sudo crio                             | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-268892                                       | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	| start   | -p custom-flannel-268892                             | custom-flannel-268892     | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                           |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-477684                         | kubernetes-upgrade-477684 | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-477684                         | kubernetes-upgrade-477684 | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:39 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:38:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:38:37.770578   67636 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:38:37.770717   67636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:38:37.770727   67636 out.go:358] Setting ErrFile to fd 2...
	I0927 18:38:37.770732   67636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:38:37.770929   67636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:38:37.771496   67636 out.go:352] Setting JSON to false
	I0927 18:38:37.772597   67636 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8463,"bootTime":1727453855,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:38:37.772697   67636 start.go:139] virtualization: kvm guest
	I0927 18:38:37.774814   67636 out.go:177] * [kubernetes-upgrade-477684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:38:37.776158   67636 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:38:37.776189   67636 notify.go:220] Checking for updates...
	I0927 18:38:37.778390   67636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:38:37.779453   67636 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:38:37.780495   67636 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:38:37.781751   67636 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:38:37.782963   67636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:38:37.784564   67636 config.go:182] Loaded profile config "kubernetes-upgrade-477684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:38:37.785024   67636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:38:37.785087   67636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:38:37.800737   67636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43659
	I0927 18:38:37.801260   67636 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:38:37.801871   67636 main.go:141] libmachine: Using API Version  1
	I0927 18:38:37.801892   67636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:38:37.802373   67636 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:38:37.802660   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:38:37.802982   67636 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:38:37.803406   67636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:38:37.803458   67636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:38:37.821610   67636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I0927 18:38:37.822106   67636 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:38:37.822708   67636 main.go:141] libmachine: Using API Version  1
	I0927 18:38:37.822744   67636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:38:37.823115   67636 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:38:37.823381   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:38:37.862754   67636 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:38:37.864368   67636 start.go:297] selected driver: kvm2
	I0927 18:38:37.864386   67636 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:38:37.864503   67636 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:38:37.865233   67636 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:38:37.865299   67636 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:38:37.881882   67636 install.go:137] /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:38:37.882483   67636 cni.go:84] Creating CNI manager for ""
	I0927 18:38:37.882547   67636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:38:37.882587   67636 start.go:340] cluster config:
	{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:38:37.882783   67636 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:38:37.884578   67636 out.go:177] * Starting "kubernetes-upgrade-477684" primary control-plane node in "kubernetes-upgrade-477684" cluster
	I0927 18:38:39.834102   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:39.834697   66146 main.go:141] libmachine: (calico-268892) DBG | unable to find current IP address of domain calico-268892 in network mk-calico-268892
	I0927 18:38:39.834728   66146 main.go:141] libmachine: (calico-268892) DBG | I0927 18:38:39.834614   66168 retry.go:31] will retry after 3.551029195s: waiting for machine to come up
	I0927 18:38:37.720408   67596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:38:37.720450   67596 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:38:37.720457   67596 cache.go:56] Caching tarball of preloaded images
	I0927 18:38:37.720534   67596 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:38:37.720551   67596 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:38:37.720640   67596 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/config.json ...
	I0927 18:38:37.720660   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/config.json: {Name:mk5323a017e441c31f366d003deb54fa8b5d9022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:37.720814   67596 start.go:360] acquireMachinesLock for custom-flannel-268892: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:38:37.885617   67636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:38:37.885673   67636 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:38:37.885690   67636 cache.go:56] Caching tarball of preloaded images
	I0927 18:38:37.885774   67636 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:38:37.885787   67636 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:38:37.885903   67636 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json ...
	I0927 18:38:37.886119   67636 start.go:360] acquireMachinesLock for kubernetes-upgrade-477684: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:38:44.815129   67596 start.go:364] duration metric: took 7.094281519s to acquireMachinesLock for "custom-flannel-268892"
	I0927 18:38:44.815188   67596 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-268892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 18:38:44.815318   67596 start.go:125] createHost starting for "" (driver="kvm2")
	I0927 18:38:43.386821   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.387248   66146 main.go:141] libmachine: (calico-268892) Found IP for machine: 192.168.61.173
	I0927 18:38:43.387278   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has current primary IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.387286   66146 main.go:141] libmachine: (calico-268892) Reserving static IP address...
	I0927 18:38:43.387634   66146 main.go:141] libmachine: (calico-268892) DBG | unable to find host DHCP lease matching {name: "calico-268892", mac: "52:54:00:1f:ac:f6", ip: "192.168.61.173"} in network mk-calico-268892
	I0927 18:38:43.469452   66146 main.go:141] libmachine: (calico-268892) DBG | Getting to WaitForSSH function...
	I0927 18:38:43.469517   66146 main.go:141] libmachine: (calico-268892) Reserved static IP address: 192.168.61.173
	I0927 18:38:43.469533   66146 main.go:141] libmachine: (calico-268892) Waiting for SSH to be available...
	I0927 18:38:43.472126   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.472528   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:43.472556   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.472643   66146 main.go:141] libmachine: (calico-268892) DBG | Using SSH client type: external
	I0927 18:38:43.472676   66146 main.go:141] libmachine: (calico-268892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa (-rw-------)
	I0927 18:38:43.472714   66146 main.go:141] libmachine: (calico-268892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.173 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:38:43.472730   66146 main.go:141] libmachine: (calico-268892) DBG | About to run SSH command:
	I0927 18:38:43.472757   66146 main.go:141] libmachine: (calico-268892) DBG | exit 0
	I0927 18:38:43.602762   66146 main.go:141] libmachine: (calico-268892) DBG | SSH cmd err, output: <nil>: 
	I0927 18:38:43.603110   66146 main.go:141] libmachine: (calico-268892) KVM machine creation complete!
	I0927 18:38:43.603391   66146 main.go:141] libmachine: (calico-268892) Calling .GetConfigRaw
	I0927 18:38:43.603972   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:43.604184   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:43.604367   66146 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 18:38:43.604389   66146 main.go:141] libmachine: (calico-268892) Calling .GetState
	I0927 18:38:43.605734   66146 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 18:38:43.605749   66146 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 18:38:43.605756   66146 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 18:38:43.605764   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:43.608614   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.608959   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:43.608991   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.609161   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:43.609423   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.609628   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.609804   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:43.609991   66146 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:43.610180   66146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.173 22 <nil> <nil>}
	I0927 18:38:43.610192   66146 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 18:38:43.718604   66146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:38:43.718628   66146 main.go:141] libmachine: Detecting the provisioner...
	I0927 18:38:43.718638   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:43.721855   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.722249   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:43.722279   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.722486   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:43.722709   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.722874   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.722991   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:43.723173   66146 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:43.723382   66146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.173 22 <nil> <nil>}
	I0927 18:38:43.723395   66146 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 18:38:43.835226   66146 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 18:38:43.835312   66146 main.go:141] libmachine: found compatible host: buildroot
	I0927 18:38:43.835320   66146 main.go:141] libmachine: Provisioning with buildroot...
	I0927 18:38:43.835327   66146 main.go:141] libmachine: (calico-268892) Calling .GetMachineName
	I0927 18:38:43.835627   66146 buildroot.go:166] provisioning hostname "calico-268892"
	I0927 18:38:43.835655   66146 main.go:141] libmachine: (calico-268892) Calling .GetMachineName
	I0927 18:38:43.835842   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:43.838493   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.838941   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:43.838973   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.839054   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:43.839247   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.839391   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.839557   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:43.839672   66146 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:43.839868   66146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.173 22 <nil> <nil>}
	I0927 18:38:43.839883   66146 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-268892 && echo "calico-268892" | sudo tee /etc/hostname
	I0927 18:38:43.964614   66146 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-268892
	
	I0927 18:38:43.964641   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:43.967594   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.967934   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:43.967966   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:43.968196   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:43.968414   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.968547   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:43.968792   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:43.968974   66146 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:43.969182   66146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.173 22 <nil> <nil>}
	I0927 18:38:43.969204   66146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-268892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-268892/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-268892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:38:44.086863   66146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:38:44.086888   66146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:38:44.086931   66146 buildroot.go:174] setting up certificates
	I0927 18:38:44.086939   66146 provision.go:84] configureAuth start
	I0927 18:38:44.086950   66146 main.go:141] libmachine: (calico-268892) Calling .GetMachineName
	I0927 18:38:44.087229   66146 main.go:141] libmachine: (calico-268892) Calling .GetIP
	I0927 18:38:44.089832   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.090175   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.090204   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.090381   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.092532   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.092904   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.092929   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.093045   66146 provision.go:143] copyHostCerts
	I0927 18:38:44.093103   66146 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:38:44.093116   66146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:38:44.093181   66146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:38:44.093325   66146 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:38:44.093337   66146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:38:44.093377   66146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:38:44.093475   66146 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:38:44.093485   66146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:38:44.093519   66146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:38:44.093604   66146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.calico-268892 san=[127.0.0.1 192.168.61.173 calico-268892 localhost minikube]
	I0927 18:38:44.179747   66146 provision.go:177] copyRemoteCerts
	I0927 18:38:44.179818   66146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:38:44.179845   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.182917   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.183211   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.183246   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.183452   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:44.183670   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.183842   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:44.184038   66146 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa Username:docker}
	I0927 18:38:44.268581   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:38:44.291950   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 18:38:44.315124   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 18:38:44.339686   66146 provision.go:87] duration metric: took 252.733734ms to configureAuth
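
configureAuth above generated server.pem with SANs for 127.0.0.1, 192.168.61.173, calico-268892, localhost and minikube, then copied it to /etc/docker on the guest. A minimal host-side sketch to confirm those SANs, assuming only the standard openssl CLI (paths taken from this log):

    # print the Subject Alternative Names of the freshly generated server cert
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem \
        | grep -A1 "Subject Alternative Name"
    # should list the SANs logged above: 127.0.0.1 192.168.61.173 calico-268892 localhost minikube
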
	I0927 18:38:44.339714   66146 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:38:44.339878   66146 config.go:182] Loaded profile config "calico-268892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:38:44.339949   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.342559   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.342913   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.342940   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.343179   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:44.343381   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.343543   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.343673   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:44.343887   66146 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:44.344090   66146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.173 22 <nil> <nil>}
	I0927 18:38:44.344105   66146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:38:44.565812   66146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
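The command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the 10.96.0.0/12 service range is treated as an insecure registry. A quick guest-side sketch to confirm the drop-in took effect (illustrative only; key path and IP from this log):

    # confirm the drop-in exists and CRI-O came back up after the restart
    ssh -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa \
        docker@192.168.61.173 'sudo cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio'
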
	I0927 18:38:44.565842   66146 main.go:141] libmachine: Checking connection to Docker...
	I0927 18:38:44.565853   66146 main.go:141] libmachine: (calico-268892) Calling .GetURL
	I0927 18:38:44.567339   66146 main.go:141] libmachine: (calico-268892) DBG | Using libvirt version 6000000
	I0927 18:38:44.569835   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.570268   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.570294   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.570511   66146 main.go:141] libmachine: Docker is up and running!
	I0927 18:38:44.570521   66146 main.go:141] libmachine: Reticulating splines...
	I0927 18:38:44.570530   66146 client.go:171] duration metric: took 22.054899841s to LocalClient.Create
	I0927 18:38:44.570553   66146 start.go:167] duration metric: took 22.054964684s to libmachine.API.Create "calico-268892"
	I0927 18:38:44.570576   66146 start.go:293] postStartSetup for "calico-268892" (driver="kvm2")
	I0927 18:38:44.570589   66146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:38:44.570608   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:44.570909   66146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:38:44.570937   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.573174   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.573535   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.573568   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.573774   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:44.573957   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.574111   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:44.574217   66146 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa Username:docker}
	I0927 18:38:44.660674   66146 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:38:44.664969   66146 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:38:44.664990   66146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:38:44.665044   66146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:38:44.665121   66146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:38:44.665211   66146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:38:44.674033   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:38:44.697332   66146 start.go:296] duration metric: took 126.742795ms for postStartSetup
	I0927 18:38:44.697380   66146 main.go:141] libmachine: (calico-268892) Calling .GetConfigRaw
	I0927 18:38:44.697982   66146 main.go:141] libmachine: (calico-268892) Calling .GetIP
	I0927 18:38:44.700531   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.700802   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.700825   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.701109   66146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/config.json ...
	I0927 18:38:44.701400   66146 start.go:128] duration metric: took 22.20655578s to createHost
	I0927 18:38:44.701425   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.703624   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.703901   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.703940   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.704058   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:44.704213   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.704361   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.704512   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:44.704665   66146 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:44.704807   66146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.173 22 <nil> <nil>}
	I0927 18:38:44.704816   66146 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:38:44.814943   66146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727462324.783950119
	
	I0927 18:38:44.814970   66146 fix.go:216] guest clock: 1727462324.783950119
	I0927 18:38:44.814979   66146 fix.go:229] Guest: 2024-09-27 18:38:44.783950119 +0000 UTC Remote: 2024-09-27 18:38:44.701414056 +0000 UTC m=+22.328499104 (delta=82.536063ms)
	I0927 18:38:44.815030   66146 fix.go:200] guest clock delta is within tolerance: 82.536063ms
	I0927 18:38:44.815037   66146 start.go:83] releasing machines lock for "calico-268892", held for 22.320297301s
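
The fix step above compares the guest's "date +%s.%N" output against the host clock and accepts the roughly 82ms delta as within tolerance. A rough sketch of the same comparison, purely illustrative (awk is used here only for the floating-point subtraction; the tolerance logic itself lives in minikube):

    key=/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa
    host_now=$(date +%s.%N)
    guest_now=$(ssh -i "$key" docker@192.168.61.173 'date +%s.%N')
    # a delta on the order of 0.08s matches the guest-clock check logged above
    awk -v h="$host_now" -v g="$guest_now" 'BEGIN{printf "clock delta: %.3fs\n", g-h}'
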
	I0927 18:38:44.815080   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:44.815358   66146 main.go:141] libmachine: (calico-268892) Calling .GetIP
	I0927 18:38:44.818320   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.818640   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.818698   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.818816   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:44.819251   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:44.819434   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:38:44.819528   66146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:38:44.819576   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.819690   66146 ssh_runner.go:195] Run: cat /version.json
	I0927 18:38:44.819707   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:38:44.822402   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.822701   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.822818   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.822843   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.822995   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:44.823058   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:44.823083   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:44.823201   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.823262   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:38:44.823403   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:44.823458   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:38:44.823624   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:38:44.823628   66146 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa Username:docker}
	I0927 18:38:44.823770   66146 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa Username:docker}
	I0927 18:38:44.904166   66146 ssh_runner.go:195] Run: systemctl --version
	I0927 18:38:44.946957   66146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:38:45.113961   66146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 18:38:45.119318   66146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:38:45.119391   66146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:38:45.135405   66146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 18:38:45.135428   66146 start.go:495] detecting cgroup driver to use...
	I0927 18:38:45.135482   66146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:38:45.156234   66146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:38:45.170841   66146 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:38:45.170898   66146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:38:45.184883   66146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:38:45.198375   66146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:38:45.321108   66146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:38:45.475040   66146 docker.go:233] disabling docker service ...
	I0927 18:38:45.475115   66146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:38:45.489859   66146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:38:45.503485   66146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:38:45.644156   66146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:38:45.772549   66146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
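
The sequence above stops, disables, and masks cri-docker and docker so that CRI-O remains the only container runtime on the node. Condensed into one guest-side sketch of the same systemctl calls (the || true guards only cover units that are already absent):

    # disable competing runtimes, mirroring the steps logged above
    sudo systemctl stop -f cri-docker.socket cri-docker.service || true
    sudo systemctl disable cri-docker.socket || true
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service || true
    sudo systemctl disable docker.socket || true
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker && echo "docker still active" || echo "docker disabled"
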
	I0927 18:38:45.789926   66146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:38:45.808160   66146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 18:38:45.808211   66146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.821028   66146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:38:45.821096   66146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.833953   66146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.844006   66146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.854437   66146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:38:45.864770   66146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.874978   66146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.893216   66146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:38:45.903582   66146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:38:45.913211   66146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 18:38:45.913277   66146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 18:38:45.927863   66146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:38:45.937818   66146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:38:46.058150   66146 ssh_runner.go:195] Run: sudo systemctl restart crio
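
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl), loads br_netfilter after the sysctl probe failed, enables IP forwarding, and restarts CRI-O. A condensed guest-side sketch of the main edits, for reference only:

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    sudo modprobe br_netfilter                 # makes /proc/sys/net/bridge/bridge-nf-call-iptables appear
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart crio
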
	I0927 18:38:46.159502   66146 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:38:46.159570   66146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:38:46.164540   66146 start.go:563] Will wait 60s for crictl version
	I0927 18:38:46.164594   66146 ssh_runner.go:195] Run: which crictl
	I0927 18:38:46.168025   66146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:38:46.207908   66146 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:38:46.207993   66146 ssh_runner.go:195] Run: crio --version
	I0927 18:38:46.235278   66146 ssh_runner.go:195] Run: crio --version
	I0927 18:38:46.267243   66146 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 18:38:46.268493   66146 main.go:141] libmachine: (calico-268892) Calling .GetIP
	I0927 18:38:46.272378   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:46.272792   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:38:46.272823   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:38:46.273103   66146 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 18:38:46.277308   66146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:38:46.292404   66146 kubeadm.go:883] updating cluster {Name:calico-268892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:38:46.292560   66146 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:38:46.292634   66146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:38:46.323971   66146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 18:38:46.324049   66146 ssh_runner.go:195] Run: which lz4
	I0927 18:38:46.327674   66146 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 18:38:46.331766   66146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 18:38:46.331803   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 18:38:44.817265   67596 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0927 18:38:44.817445   67596 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:38:44.817489   67596 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:38:44.834984   67596 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0927 18:38:44.835462   67596 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:38:44.836062   67596 main.go:141] libmachine: Using API Version  1
	I0927 18:38:44.836082   67596 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:38:44.836441   67596 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:38:44.836630   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetMachineName
	I0927 18:38:44.836774   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:38:44.836939   67596 start.go:159] libmachine.API.Create for "custom-flannel-268892" (driver="kvm2")
	I0927 18:38:44.836989   67596 client.go:168] LocalClient.Create starting
	I0927 18:38:44.837031   67596 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem
	I0927 18:38:44.837078   67596 main.go:141] libmachine: Decoding PEM data...
	I0927 18:38:44.837105   67596 main.go:141] libmachine: Parsing certificate...
	I0927 18:38:44.837172   67596 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem
	I0927 18:38:44.837211   67596 main.go:141] libmachine: Decoding PEM data...
	I0927 18:38:44.837229   67596 main.go:141] libmachine: Parsing certificate...
	I0927 18:38:44.837266   67596 main.go:141] libmachine: Running pre-create checks...
	I0927 18:38:44.837280   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .PreCreateCheck
	I0927 18:38:44.837738   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetConfigRaw
	I0927 18:38:44.838137   67596 main.go:141] libmachine: Creating machine...
	I0927 18:38:44.838151   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .Create
	I0927 18:38:44.838277   67596 main.go:141] libmachine: (custom-flannel-268892) Creating KVM machine...
	I0927 18:38:44.839888   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found existing default KVM network
	I0927 18:38:44.841661   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:44.841476   67736 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d7e0}
	I0927 18:38:44.841686   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | created network xml: 
	I0927 18:38:44.841752   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | <network>
	I0927 18:38:44.841780   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |   <name>mk-custom-flannel-268892</name>
	I0927 18:38:44.841791   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |   <dns enable='no'/>
	I0927 18:38:44.841800   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |   
	I0927 18:38:44.841811   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0927 18:38:44.841819   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |     <dhcp>
	I0927 18:38:44.841832   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0927 18:38:44.841842   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |     </dhcp>
	I0927 18:38:44.841860   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |   </ip>
	I0927 18:38:44.841869   67596 main.go:141] libmachine: (custom-flannel-268892) DBG |   
	I0927 18:38:44.841880   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | </network>
	I0927 18:38:44.841888   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | 
	I0927 18:38:44.847493   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | trying to create private KVM network mk-custom-flannel-268892 192.168.39.0/24...
	I0927 18:38:44.925015   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | private KVM network mk-custom-flannel-268892 192.168.39.0/24 created
	I0927 18:38:44.925078   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:44.924974   67736 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:38:44.925101   67596 main.go:141] libmachine: (custom-flannel-268892) Setting up store path in /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892 ...
	I0927 18:38:44.925121   67596 main.go:141] libmachine: (custom-flannel-268892) Building disk image from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 18:38:44.925140   67596 main.go:141] libmachine: (custom-flannel-268892) Downloading /home/jenkins/minikube-integration/19712-11184/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I0927 18:38:45.180875   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:45.180672   67736 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa...
	I0927 18:38:45.258906   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:45.258777   67736 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/custom-flannel-268892.rawdisk...
	I0927 18:38:45.258933   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Writing magic tar header
	I0927 18:38:45.258951   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Writing SSH key tar header
	I0927 18:38:45.258963   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:45.258884   67736 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892 ...
	I0927 18:38:45.258982   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892
	I0927 18:38:45.259014   67596 main.go:141] libmachine: (custom-flannel-268892) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892 (perms=drwx------)
	I0927 18:38:45.259020   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube/machines
	I0927 18:38:45.259030   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:38:45.259041   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19712-11184
	I0927 18:38:45.259062   67596 main.go:141] libmachine: (custom-flannel-268892) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube/machines (perms=drwxr-xr-x)
	I0927 18:38:45.259076   67596 main.go:141] libmachine: (custom-flannel-268892) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184/.minikube (perms=drwxr-xr-x)
	I0927 18:38:45.259084   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0927 18:38:45.259091   67596 main.go:141] libmachine: (custom-flannel-268892) Setting executable bit set on /home/jenkins/minikube-integration/19712-11184 (perms=drwxrwxr-x)
	I0927 18:38:45.259098   67596 main.go:141] libmachine: (custom-flannel-268892) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0927 18:38:45.259107   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home/jenkins
	I0927 18:38:45.259117   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Checking permissions on dir: /home
	I0927 18:38:45.259122   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Skipping /home - not owner
	I0927 18:38:45.259131   67596 main.go:141] libmachine: (custom-flannel-268892) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0927 18:38:45.259140   67596 main.go:141] libmachine: (custom-flannel-268892) Creating domain...
	I0927 18:38:45.260441   67596 main.go:141] libmachine: (custom-flannel-268892) define libvirt domain using xml: 
	I0927 18:38:45.260461   67596 main.go:141] libmachine: (custom-flannel-268892) <domain type='kvm'>
	I0927 18:38:45.260478   67596 main.go:141] libmachine: (custom-flannel-268892)   <name>custom-flannel-268892</name>
	I0927 18:38:45.260487   67596 main.go:141] libmachine: (custom-flannel-268892)   <memory unit='MiB'>3072</memory>
	I0927 18:38:45.260495   67596 main.go:141] libmachine: (custom-flannel-268892)   <vcpu>2</vcpu>
	I0927 18:38:45.260503   67596 main.go:141] libmachine: (custom-flannel-268892)   <features>
	I0927 18:38:45.260511   67596 main.go:141] libmachine: (custom-flannel-268892)     <acpi/>
	I0927 18:38:45.260518   67596 main.go:141] libmachine: (custom-flannel-268892)     <apic/>
	I0927 18:38:45.260526   67596 main.go:141] libmachine: (custom-flannel-268892)     <pae/>
	I0927 18:38:45.260537   67596 main.go:141] libmachine: (custom-flannel-268892)     
	I0927 18:38:45.260542   67596 main.go:141] libmachine: (custom-flannel-268892)   </features>
	I0927 18:38:45.260546   67596 main.go:141] libmachine: (custom-flannel-268892)   <cpu mode='host-passthrough'>
	I0927 18:38:45.260550   67596 main.go:141] libmachine: (custom-flannel-268892)   
	I0927 18:38:45.260557   67596 main.go:141] libmachine: (custom-flannel-268892)   </cpu>
	I0927 18:38:45.260594   67596 main.go:141] libmachine: (custom-flannel-268892)   <os>
	I0927 18:38:45.260616   67596 main.go:141] libmachine: (custom-flannel-268892)     <type>hvm</type>
	I0927 18:38:45.260622   67596 main.go:141] libmachine: (custom-flannel-268892)     <boot dev='cdrom'/>
	I0927 18:38:45.260626   67596 main.go:141] libmachine: (custom-flannel-268892)     <boot dev='hd'/>
	I0927 18:38:45.260632   67596 main.go:141] libmachine: (custom-flannel-268892)     <bootmenu enable='no'/>
	I0927 18:38:45.260639   67596 main.go:141] libmachine: (custom-flannel-268892)   </os>
	I0927 18:38:45.260644   67596 main.go:141] libmachine: (custom-flannel-268892)   <devices>
	I0927 18:38:45.260651   67596 main.go:141] libmachine: (custom-flannel-268892)     <disk type='file' device='cdrom'>
	I0927 18:38:45.260668   67596 main.go:141] libmachine: (custom-flannel-268892)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/boot2docker.iso'/>
	I0927 18:38:45.260676   67596 main.go:141] libmachine: (custom-flannel-268892)       <target dev='hdc' bus='scsi'/>
	I0927 18:38:45.260684   67596 main.go:141] libmachine: (custom-flannel-268892)       <readonly/>
	I0927 18:38:45.260694   67596 main.go:141] libmachine: (custom-flannel-268892)     </disk>
	I0927 18:38:45.260724   67596 main.go:141] libmachine: (custom-flannel-268892)     <disk type='file' device='disk'>
	I0927 18:38:45.260746   67596 main.go:141] libmachine: (custom-flannel-268892)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0927 18:38:45.260764   67596 main.go:141] libmachine: (custom-flannel-268892)       <source file='/home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/custom-flannel-268892.rawdisk'/>
	I0927 18:38:45.260777   67596 main.go:141] libmachine: (custom-flannel-268892)       <target dev='hda' bus='virtio'/>
	I0927 18:38:45.260788   67596 main.go:141] libmachine: (custom-flannel-268892)     </disk>
	I0927 18:38:45.260796   67596 main.go:141] libmachine: (custom-flannel-268892)     <interface type='network'>
	I0927 18:38:45.260810   67596 main.go:141] libmachine: (custom-flannel-268892)       <source network='mk-custom-flannel-268892'/>
	I0927 18:38:45.260821   67596 main.go:141] libmachine: (custom-flannel-268892)       <model type='virtio'/>
	I0927 18:38:45.260834   67596 main.go:141] libmachine: (custom-flannel-268892)     </interface>
	I0927 18:38:45.260845   67596 main.go:141] libmachine: (custom-flannel-268892)     <interface type='network'>
	I0927 18:38:45.260862   67596 main.go:141] libmachine: (custom-flannel-268892)       <source network='default'/>
	I0927 18:38:45.260873   67596 main.go:141] libmachine: (custom-flannel-268892)       <model type='virtio'/>
	I0927 18:38:45.260881   67596 main.go:141] libmachine: (custom-flannel-268892)     </interface>
	I0927 18:38:45.260891   67596 main.go:141] libmachine: (custom-flannel-268892)     <serial type='pty'>
	I0927 18:38:45.260900   67596 main.go:141] libmachine: (custom-flannel-268892)       <target port='0'/>
	I0927 18:38:45.260909   67596 main.go:141] libmachine: (custom-flannel-268892)     </serial>
	I0927 18:38:45.260916   67596 main.go:141] libmachine: (custom-flannel-268892)     <console type='pty'>
	I0927 18:38:45.260931   67596 main.go:141] libmachine: (custom-flannel-268892)       <target type='serial' port='0'/>
	I0927 18:38:45.260941   67596 main.go:141] libmachine: (custom-flannel-268892)     </console>
	I0927 18:38:45.260950   67596 main.go:141] libmachine: (custom-flannel-268892)     <rng model='virtio'>
	I0927 18:38:45.260961   67596 main.go:141] libmachine: (custom-flannel-268892)       <backend model='random'>/dev/random</backend>
	I0927 18:38:45.260972   67596 main.go:141] libmachine: (custom-flannel-268892)     </rng>
	I0927 18:38:45.260980   67596 main.go:141] libmachine: (custom-flannel-268892)     
	I0927 18:38:45.260991   67596 main.go:141] libmachine: (custom-flannel-268892)     
	I0927 18:38:45.260997   67596 main.go:141] libmachine: (custom-flannel-268892)   </devices>
	I0927 18:38:45.261005   67596 main.go:141] libmachine: (custom-flannel-268892) </domain>
	I0927 18:38:45.261009   67596 main.go:141] libmachine: (custom-flannel-268892) 
	I0927 18:38:45.264956   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:3b:72:81 in network default
	I0927 18:38:45.265513   67596 main.go:141] libmachine: (custom-flannel-268892) Ensuring networks are active...
	I0927 18:38:45.265532   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:45.266322   67596 main.go:141] libmachine: (custom-flannel-268892) Ensuring network default is active
	I0927 18:38:45.266717   67596 main.go:141] libmachine: (custom-flannel-268892) Ensuring network mk-custom-flannel-268892 is active
	I0927 18:38:45.267299   67596 main.go:141] libmachine: (custom-flannel-268892) Getting domain xml...
	I0927 18:38:45.268010   67596 main.go:141] libmachine: (custom-flannel-268892) Creating domain...
	I0927 18:38:46.650814   67596 main.go:141] libmachine: (custom-flannel-268892) Waiting to get IP...
	I0927 18:38:46.651901   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:46.652379   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:46.652434   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:46.652363   67736 retry.go:31] will retry after 303.285766ms: waiting for machine to come up
	I0927 18:38:46.956953   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:46.957716   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:46.957753   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:46.957679   67736 retry.go:31] will retry after 320.370856ms: waiting for machine to come up
	I0927 18:38:47.279379   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:47.279872   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:47.279892   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:47.279825   67736 retry.go:31] will retry after 440.05211ms: waiting for machine to come up
	I0927 18:38:47.732883   66146 crio.go:462] duration metric: took 1.40523336s to copy over tarball
	I0927 18:38:47.732973   66146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0927 18:38:50.371348   66146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.638311444s)
	I0927 18:38:50.371382   66146 crio.go:469] duration metric: took 2.63846287s to extract the tarball
	I0927 18:38:50.371392   66146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 18:38:50.407369   66146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:38:50.452005   66146 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:38:50.452032   66146 cache_images.go:84] Images are preloaded, skipping loading
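
Because /preloaded.tar.lz4 did not exist on the guest, the roughly 388 MB preload tarball was copied over and unpacked into /var, after which crictl reports every image as preloaded. A guest-side sketch of the same extraction and verification, for reference only:

    # unpack the preload into CRI-O's storage and confirm the control-plane images are present
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json | grep -c kube-apiserver    # non-zero once the preload is in place
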
	I0927 18:38:50.452043   66146 kubeadm.go:934] updating node { 192.168.61.173 8443 v1.31.1 crio true true} ...
	I0927 18:38:50.452164   66146 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-268892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:calico-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
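
The kubelet unit override shown above (Wants=crio.service plus the ExecStart flags for --hostname-override, --kubeconfig and --node-ip) is written farther below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small guest-side sketch for confirming the flags actually reached systemd (illustrative only):

    # show the rendered kubelet drop-in and the node-ip flag minikube injected
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl cat kubelet | grep -- --node-ip    # expect --node-ip=192.168.61.173
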
	I0927 18:38:50.452256   66146 ssh_runner.go:195] Run: crio config
	I0927 18:38:50.501928   66146 cni.go:84] Creating CNI manager for "calico"
	I0927 18:38:50.501952   66146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:38:50.501972   66146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.173 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-268892 NodeName:calico-268892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.173"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:38:50.502160   66146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-268892"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.173"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
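The kubeadm configuration rendered above is staged below as /var/tmp/minikube/kubeadm.yaml.new before kubeadm is invoked. A guest-side sketch for inspecting the staged file and the fields that matter most for this profile (illustrative only):

    # inspect the staged kubeadm config and the key networking fields
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo grep -E 'advertiseAddress|podSubnet|serviceSubnet|criSocket' /var/tmp/minikube/kubeadm.yaml.new
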
	I0927 18:38:50.502227   66146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:38:50.513372   66146 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:38:50.513442   66146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:38:50.523457   66146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0927 18:38:50.541235   66146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:38:50.557558   66146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0927 18:38:50.575054   66146 ssh_runner.go:195] Run: grep 192.168.61.173	control-plane.minikube.internal$ /etc/hosts
	I0927 18:38:50.578786   66146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.173	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:38:50.590083   66146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:38:50.724617   66146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:38:50.742942   66146 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892 for IP: 192.168.61.173
	I0927 18:38:50.742970   66146 certs.go:194] generating shared ca certs ...
	I0927 18:38:50.742990   66146 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:50.743206   66146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:38:50.743271   66146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:38:50.743284   66146 certs.go:256] generating profile certs ...
	I0927 18:38:50.743372   66146 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.key
	I0927 18:38:50.743403   66146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.crt with IP's: []
	I0927 18:38:51.030066   66146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.crt ...
	I0927 18:38:51.030101   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.crt: {Name:mk78cb22ebc4f0ffcf473fafedd3d4065f0b05d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:51.030287   66146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.key ...
	I0927 18:38:51.030298   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.key: {Name:mk39f4b1df3c870fb9b7d4764eee4266d47e86a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:51.030372   66146 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.key.827a0343
	I0927 18:38:51.030387   66146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.crt.827a0343 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.173]
	I0927 18:38:51.498780   66146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.crt.827a0343 ...
	I0927 18:38:51.498817   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.crt.827a0343: {Name:mkdd5d303409d8025c7ff54655857b24713d7af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:51.498982   66146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.key.827a0343 ...
	I0927 18:38:51.498996   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.key.827a0343: {Name:mkf8f0934c020ff26a8f07c56d0787379f28111d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:51.499067   66146 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.crt.827a0343 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.crt
	I0927 18:38:51.499135   66146 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.key.827a0343 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.key
	I0927 18:38:51.499196   66146 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.key
	I0927 18:38:51.499212   66146 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.crt with IP's: []
	I0927 18:38:51.639648   66146 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.crt ...
	I0927 18:38:51.639676   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.crt: {Name:mk54eb27b33018031cf4124b3f5af42813d88330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:51.639826   66146 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.key ...
	I0927 18:38:51.639836   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.key: {Name:mk1602cded26283d47ad7348a01c2e438e82779b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:51.639992   66146 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:38:51.640027   66146 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:38:51.640034   66146 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:38:51.640058   66146 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:38:51.640082   66146 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:38:51.640102   66146 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:38:51.640148   66146 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:38:51.640692   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:38:51.673321   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:38:51.705022   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:38:51.733515   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:38:51.760209   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 18:38:51.785208   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 18:38:51.809981   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:38:51.835151   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 18:38:51.859160   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:38:51.882190   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:38:51.906253   66146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:38:51.930604   66146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:38:51.948129   66146 ssh_runner.go:195] Run: openssl version
	I0927 18:38:51.953831   66146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:38:51.964927   66146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:38:51.969919   66146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:38:51.969997   66146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:38:51.975831   66146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:38:51.986731   66146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:38:51.997569   66146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:38:52.002154   66146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:38:52.002212   66146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:38:52.007868   66146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:38:52.018797   66146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:38:52.029925   66146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:38:52.034353   66146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:38:52.034430   66146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:38:52.040795   66146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
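The openssl x509 -hash / ln -fs sequence above installs each CA into the node's trust store under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). Below is a minimal Go sketch of that pattern, shelling out to openssl the same way the log commands do; it is an illustration only, not minikube's certs.go, and the cert path is just an example.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (e.g. b5213941.0), mirroring the ln -fs commands above.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}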
	I0927 18:38:52.053527   66146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:38:52.057711   66146 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 18:38:52.057778   66146 kubeadm.go:392] StartCluster: {Name:calico-268892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:38:52.057878   66146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:38:52.057943   66146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:38:52.095413   66146 cri.go:89] found id: ""
	I0927 18:38:52.095494   66146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 18:38:52.105318   66146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 18:38:52.114978   66146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 18:38:52.124484   66146 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 18:38:52.124510   66146 kubeadm.go:157] found existing configuration files:
	
	I0927 18:38:52.124563   66146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 18:38:52.135456   66146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 18:38:52.135535   66146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 18:38:52.146361   66146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 18:38:52.156260   66146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 18:38:52.156328   66146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 18:38:52.165474   66146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 18:38:52.174857   66146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 18:38:52.174933   66146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 18:38:52.184395   66146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 18:38:52.194481   66146 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 18:38:52.194543   66146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
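The grep / rm -f pairs above check each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and delete any file that does not reference it, so kubeadm init starts from a clean slate. The following is a rough local equivalent of that check, a sketch under the assumption of direct file access rather than minikube's ssh_runner.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConf removes conf if it does not reference the expected endpoint,
// mirroring the "grep ... || rm -f ..." sequence in the log above.
func pruneStaleConf(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err != nil || !strings.Contains(string(data), endpoint) {
		return os.Remove(conf) // the caller ignores "file already gone" errors
	}
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := pruneStaleConf("/etc/kubernetes/"+f, endpoint); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}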
	I0927 18:38:52.204922   66146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 18:38:52.252527   66146 kubeadm.go:310] W0927 18:38:52.226924     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 18:38:52.253349   66146 kubeadm.go:310] W0927 18:38:52.227899     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 18:38:52.364739   66146 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 18:38:47.721192   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:47.721827   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:47.721853   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:47.721797   67736 retry.go:31] will retry after 598.27377ms: waiting for machine to come up
	I0927 18:38:48.321629   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:48.322375   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:48.322422   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:48.322337   67736 retry.go:31] will retry after 575.527661ms: waiting for machine to come up
	I0927 18:38:48.899125   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:48.899685   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:48.899710   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:48.899634   67736 retry.go:31] will retry after 937.533136ms: waiting for machine to come up
	I0927 18:38:49.839036   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:49.840591   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:49.840630   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:49.839628   67736 retry.go:31] will retry after 853.809798ms: waiting for machine to come up
	I0927 18:38:50.695803   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:50.696371   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:50.696398   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:50.696316   67736 retry.go:31] will retry after 1.429252446s: waiting for machine to come up
	I0927 18:38:52.127847   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:52.128409   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:52.128444   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:52.128369   67736 retry.go:31] will retry after 1.173273825s: waiting for machine to come up
	I0927 18:38:53.303347   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:53.304001   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:53.304026   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:53.303946   67736 retry.go:31] will retry after 1.813484817s: waiting for machine to come up
	I0927 18:38:55.118523   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:55.119053   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:55.119084   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:55.118995   67736 retry.go:31] will retry after 2.7423984s: waiting for machine to come up
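The repeated "will retry after ..." lines come from a jittered retry loop that keeps asking libvirt for the domain's DHCP lease until an IP appears, with the delay growing on each attempt. Here is a minimal sketch of that retry shape; lookupIP is a stand-in invented for the example, not minikube's retry.go or its libvirt query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt for the domain's lease;
// it succeeds after a few attempts purely for illustration.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.123", nil
}

func main() {
	base := 500 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// grow the wait and add jitter, like the increasing "will retry after" delays above
		wait := time.Duration(attempt)*base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
}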
	I0927 18:39:01.660419   66146 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 18:39:01.660500   66146 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 18:39:01.660600   66146 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 18:39:01.660746   66146 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 18:39:01.660885   66146 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 18:39:01.660965   66146 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 18:39:01.662924   66146 out.go:235]   - Generating certificates and keys ...
	I0927 18:39:01.662998   66146 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 18:39:01.663057   66146 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 18:39:01.663141   66146 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 18:39:01.663230   66146 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 18:39:01.663320   66146 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 18:39:01.663374   66146 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 18:39:01.663446   66146 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 18:39:01.663599   66146 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-268892 localhost] and IPs [192.168.61.173 127.0.0.1 ::1]
	I0927 18:39:01.663661   66146 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 18:39:01.663835   66146 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-268892 localhost] and IPs [192.168.61.173 127.0.0.1 ::1]
	I0927 18:39:01.663924   66146 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 18:39:01.664016   66146 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 18:39:01.664080   66146 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 18:39:01.664172   66146 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 18:39:01.664224   66146 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 18:39:01.664300   66146 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 18:39:01.664375   66146 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 18:39:01.664465   66146 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 18:39:01.664544   66146 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 18:39:01.664665   66146 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 18:39:01.664769   66146 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 18:39:01.666381   66146 out.go:235]   - Booting up control plane ...
	I0927 18:39:01.666487   66146 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 18:39:01.666585   66146 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 18:39:01.666686   66146 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 18:39:01.666779   66146 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 18:39:01.666887   66146 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 18:39:01.666938   66146 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 18:39:01.667045   66146 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 18:39:01.667199   66146 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 18:39:01.667299   66146 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.81381ms
	I0927 18:39:01.667403   66146 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 18:39:01.667497   66146 kubeadm.go:310] [api-check] The API server is healthy after 5.001282445s
	I0927 18:39:01.667620   66146 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 18:39:01.667744   66146 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 18:39:01.667810   66146 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 18:39:01.668041   66146 kubeadm.go:310] [mark-control-plane] Marking the node calico-268892 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 18:39:01.668128   66146 kubeadm.go:310] [bootstrap-token] Using token: wxvb1p.pvkaneuwc5veb8vj
	I0927 18:39:01.669440   66146 out.go:235]   - Configuring RBAC rules ...
	I0927 18:39:01.669548   66146 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 18:39:01.669650   66146 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 18:39:01.669852   66146 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 18:39:01.669964   66146 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 18:39:01.670106   66146 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 18:39:01.670215   66146 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 18:39:01.670323   66146 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 18:39:01.670362   66146 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 18:39:01.670402   66146 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 18:39:01.670409   66146 kubeadm.go:310] 
	I0927 18:39:01.670465   66146 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 18:39:01.670478   66146 kubeadm.go:310] 
	I0927 18:39:01.670562   66146 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 18:39:01.670572   66146 kubeadm.go:310] 
	I0927 18:39:01.670614   66146 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 18:39:01.670705   66146 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 18:39:01.670770   66146 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 18:39:01.670781   66146 kubeadm.go:310] 
	I0927 18:39:01.670823   66146 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 18:39:01.670829   66146 kubeadm.go:310] 
	I0927 18:39:01.670866   66146 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 18:39:01.670872   66146 kubeadm.go:310] 
	I0927 18:39:01.670911   66146 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 18:39:01.670990   66146 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 18:39:01.671048   66146 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 18:39:01.671056   66146 kubeadm.go:310] 
	I0927 18:39:01.671146   66146 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 18:39:01.671263   66146 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 18:39:01.671273   66146 kubeadm.go:310] 
	I0927 18:39:01.671400   66146 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wxvb1p.pvkaneuwc5veb8vj \
	I0927 18:39:01.671503   66146 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 \
	I0927 18:39:01.671542   66146 kubeadm.go:310] 	--control-plane 
	I0927 18:39:01.671549   66146 kubeadm.go:310] 
	I0927 18:39:01.671671   66146 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 18:39:01.671681   66146 kubeadm.go:310] 
	I0927 18:39:01.671799   66146 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wxvb1p.pvkaneuwc5veb8vj \
	I0927 18:39:01.671922   66146 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:57e8a3d2f956b4658647f4bb7f8e40a9b386167f829002db6a6fbca7e2193c93 
	I0927 18:39:01.671934   66146 cni.go:84] Creating CNI manager for "calico"
	I0927 18:39:01.673498   66146 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0927 18:39:01.674992   66146 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0927 18:39:01.675013   66146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0927 18:39:01.698523   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
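Applying the Calico CNI amounts to writing cni.yaml onto the node and invoking the bundled kubectl against the node-local kubeconfig, as the two lines above show. A hedged local sketch of the same two steps follows; it runs directly on the node rather than through ssh_runner, and the kubectl/kubeconfig paths are the ones from the log, which only exist inside the minikube VM.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNI writes the manifest to the path used in the log and applies it with
// the node's kubectl, a rough analogue of the scp + kubectl apply steps above.
func applyCNI(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
		return err
	}
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	manifest, err := os.ReadFile("calico.yaml") // hypothetical local copy of the CNI manifest
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if err := applyCNI(manifest); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}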
	I0927 18:38:57.864454   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:38:57.865407   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:38:57.865431   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:38:57.865360   67736 retry.go:31] will retry after 3.577839432s: waiting for machine to come up
	I0927 18:39:01.445877   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:01.446418   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:39:01.446439   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:39:01.446331   67736 retry.go:31] will retry after 3.183797302s: waiting for machine to come up
	I0927 18:39:03.056798   66146 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.358234786s)
	I0927 18:39:03.056861   66146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 18:39:03.056950   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:03.056978   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-268892 minikube.k8s.io/updated_at=2024_09_27T18_39_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c minikube.k8s.io/name=calico-268892 minikube.k8s.io/primary=true
	I0927 18:39:03.070240   66146 ops.go:34] apiserver oom_adj: -16
	I0927 18:39:03.182982   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:03.683419   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:04.183257   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:04.683344   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:05.183205   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:05.683296   66146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 18:39:05.766513   66146 kubeadm.go:1113] duration metric: took 2.709616258s to wait for elevateKubeSystemPrivileges
	I0927 18:39:05.766549   66146 kubeadm.go:394] duration metric: took 13.7087749s to StartCluster
	I0927 18:39:05.766574   66146 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:05.766681   66146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:39:05.767556   66146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:05.767812   66146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 18:39:05.767825   66146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 18:39:05.767802   66146 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.173 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 18:39:05.767903   66146 addons.go:69] Setting storage-provisioner=true in profile "calico-268892"
	I0927 18:39:05.767919   66146 addons.go:234] Setting addon storage-provisioner=true in "calico-268892"
	I0927 18:39:05.767927   66146 addons.go:69] Setting default-storageclass=true in profile "calico-268892"
	I0927 18:39:05.767950   66146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-268892"
	I0927 18:39:05.767952   66146 host.go:66] Checking if "calico-268892" exists ...
	I0927 18:39:05.768339   66146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:39:05.768367   66146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:39:05.768384   66146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:39:05.768386   66146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:39:05.768684   66146 config.go:182] Loaded profile config "calico-268892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:39:05.769731   66146 out.go:177] * Verifying Kubernetes components...
	I0927 18:39:05.771169   66146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:39:05.784293   66146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0927 18:39:05.784754   66146 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:39:05.785273   66146 main.go:141] libmachine: Using API Version  1
	I0927 18:39:05.785295   66146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:39:05.785644   66146 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:39:05.785842   66146 main.go:141] libmachine: (calico-268892) Calling .GetState
	I0927 18:39:05.789674   66146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0927 18:39:05.790109   66146 addons.go:234] Setting addon default-storageclass=true in "calico-268892"
	I0927 18:39:05.790163   66146 host.go:66] Checking if "calico-268892" exists ...
	I0927 18:39:05.790537   66146 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:39:05.791062   66146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:39:05.791112   66146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:39:05.791202   66146 main.go:141] libmachine: Using API Version  1
	I0927 18:39:05.791227   66146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:39:05.791675   66146 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:39:05.792215   66146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:39:05.792285   66146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:39:05.806065   66146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38797
	I0927 18:39:05.806565   66146 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:39:05.807107   66146 main.go:141] libmachine: Using API Version  1
	I0927 18:39:05.807128   66146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:39:05.807426   66146 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:39:05.808037   66146 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:39:05.808082   66146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:39:05.812731   66146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46669
	I0927 18:39:05.813154   66146 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:39:05.813607   66146 main.go:141] libmachine: Using API Version  1
	I0927 18:39:05.813632   66146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:39:05.814041   66146 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:39:05.814269   66146 main.go:141] libmachine: (calico-268892) Calling .GetState
	I0927 18:39:05.816286   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:39:05.818152   66146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 18:39:05.819695   66146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:39:05.819717   66146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 18:39:05.819738   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:39:05.823365   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:39:05.823905   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:39:05.823937   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:39:05.824141   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:39:05.824381   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:39:05.824612   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:39:05.824780   66146 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa Username:docker}
	I0927 18:39:05.825428   66146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
	I0927 18:39:05.825821   66146 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:39:05.826339   66146 main.go:141] libmachine: Using API Version  1
	I0927 18:39:05.826365   66146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:39:05.826747   66146 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:39:05.826967   66146 main.go:141] libmachine: (calico-268892) Calling .GetState
	I0927 18:39:05.828656   66146 main.go:141] libmachine: (calico-268892) Calling .DriverName
	I0927 18:39:05.828857   66146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 18:39:05.828875   66146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 18:39:05.828893   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHHostname
	I0927 18:39:05.832060   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:39:05.832449   66146 main.go:141] libmachine: (calico-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:ac:f6", ip: ""} in network mk-calico-268892: {Iface:virbr3 ExpiryTime:2024-09-27 19:38:37 +0000 UTC Type:0 Mac:52:54:00:1f:ac:f6 Iaid: IPaddr:192.168.61.173 Prefix:24 Hostname:calico-268892 Clientid:01:52:54:00:1f:ac:f6}
	I0927 18:39:05.832482   66146 main.go:141] libmachine: (calico-268892) DBG | domain calico-268892 has defined IP address 192.168.61.173 and MAC address 52:54:00:1f:ac:f6 in network mk-calico-268892
	I0927 18:39:05.832662   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHPort
	I0927 18:39:05.832839   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHKeyPath
	I0927 18:39:05.832998   66146 main.go:141] libmachine: (calico-268892) Calling .GetSSHUsername
	I0927 18:39:05.833144   66146 sshutil.go:53] new ssh client: &{IP:192.168.61.173 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/calico-268892/id_rsa Username:docker}
	I0927 18:39:06.007342   66146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:39:06.007398   66146 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 18:39:06.053344   66146 node_ready.go:35] waiting up to 15m0s for node "calico-268892" to be "Ready" ...
	I0927 18:39:06.134178   66146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 18:39:06.177489   66146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 18:39:06.652299   66146 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
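The long sed pipeline at 18:39:06.007398 splices a hosts { ... } stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the host's IP, which is what the "host record injected" line above confirms. Below is a rough Go sketch of that string edit, operating on an in-memory Corefile rather than the live ConfigMap, and skipping the extra "log" directive the real pipeline also adds.

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block ahead of the "forward . /etc/resolv.conf"
// line, roughly what the sed expression in the log does to the coredns ConfigMap.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// A trimmed example Corefile, not the cluster's actual ConfigMap contents.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
}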
	I0927 18:39:06.889012   66146 main.go:141] libmachine: Making call to close driver server
	I0927 18:39:06.889033   66146 main.go:141] libmachine: Making call to close driver server
	I0927 18:39:06.889059   66146 main.go:141] libmachine: (calico-268892) Calling .Close
	I0927 18:39:06.889043   66146 main.go:141] libmachine: (calico-268892) Calling .Close
	I0927 18:39:06.889411   66146 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:39:06.889428   66146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:39:06.889439   66146 main.go:141] libmachine: Making call to close driver server
	I0927 18:39:06.889446   66146 main.go:141] libmachine: (calico-268892) Calling .Close
	I0927 18:39:06.889459   66146 main.go:141] libmachine: (calico-268892) DBG | Closing plugin on server side
	I0927 18:39:06.889459   66146 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:39:06.889475   66146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:39:06.889484   66146 main.go:141] libmachine: Making call to close driver server
	I0927 18:39:06.889510   66146 main.go:141] libmachine: (calico-268892) Calling .Close
	I0927 18:39:06.889646   66146 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:39:06.889670   66146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:39:06.889774   66146 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:39:06.889789   66146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:39:06.906706   66146 main.go:141] libmachine: Making call to close driver server
	I0927 18:39:06.906727   66146 main.go:141] libmachine: (calico-268892) Calling .Close
	I0927 18:39:06.907011   66146 main.go:141] libmachine: (calico-268892) DBG | Closing plugin on server side
	I0927 18:39:06.907047   66146 main.go:141] libmachine: Successfully made call to close driver server
	I0927 18:39:06.907054   66146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0927 18:39:06.909015   66146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0927 18:39:06.910481   66146 addons.go:510] duration metric: took 1.142647737s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0927 18:39:07.156231   66146 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-268892" context rescaled to 1 replicas
	I0927 18:39:04.633704   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:04.634144   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find current IP address of domain custom-flannel-268892 in network mk-custom-flannel-268892
	I0927 18:39:04.634173   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | I0927 18:39:04.634099   67736 retry.go:31] will retry after 4.73578444s: waiting for machine to come up
	I0927 18:39:08.057404   66146 node_ready.go:53] node "calico-268892" has status "Ready":"False"
	I0927 18:39:10.557051   66146 node_ready.go:53] node "calico-268892" has status "Ready":"False"
	I0927 18:39:09.371835   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:09.372457   67596 main.go:141] libmachine: (custom-flannel-268892) Found IP for machine: 192.168.39.123
	I0927 18:39:09.372487   67596 main.go:141] libmachine: (custom-flannel-268892) Reserving static IP address...
	I0927 18:39:09.372500   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has current primary IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:09.372808   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find host DHCP lease matching {name: "custom-flannel-268892", mac: "52:54:00:88:89:00", ip: "192.168.39.123"} in network mk-custom-flannel-268892
	I0927 18:39:09.456797   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Getting to WaitForSSH function...
	I0927 18:39:09.456832   67596 main.go:141] libmachine: (custom-flannel-268892) Reserved static IP address: 192.168.39.123
	I0927 18:39:09.456846   67596 main.go:141] libmachine: (custom-flannel-268892) Waiting for SSH to be available...
	I0927 18:39:09.460073   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:09.460525   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892
	I0927 18:39:09.460557   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | unable to find defined IP address of network mk-custom-flannel-268892 interface with MAC address 52:54:00:88:89:00
	I0927 18:39:09.460767   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Using SSH client type: external
	I0927 18:39:09.460792   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa (-rw-------)
	I0927 18:39:09.460843   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:39:09.460861   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | About to run SSH command:
	I0927 18:39:09.460876   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | exit 0
	I0927 18:39:09.464849   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | SSH cmd err, output: exit status 255: 
	I0927 18:39:09.464876   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0927 18:39:09.464886   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | command : exit 0
	I0927 18:39:09.464897   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | err     : exit status 255
	I0927 18:39:09.464908   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | output  : 
	I0927 18:39:12.467285   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Getting to WaitForSSH function...
	I0927 18:39:12.469840   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.470283   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:12.470305   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.470512   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Using SSH client type: external
	I0927 18:39:12.470559   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa (-rw-------)
	I0927 18:39:12.470599   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:39:12.470617   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | About to run SSH command:
	I0927 18:39:12.470634   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | exit 0
	I0927 18:39:12.598782   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | SSH cmd err, output: <nil>: 
	I0927 18:39:12.599061   67596 main.go:141] libmachine: (custom-flannel-268892) KVM machine creation complete!
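WaitForSSH above simply keeps running exit 0 over SSH until the guest's sshd answers: the attempt at 18:39:09 fails with status 255 while the VM is still booting, and the retry at 18:39:12 succeeds. A minimal sketch of that readiness probe using the external ssh client follows; the user and IP are taken from the log, while the key path and timeout are placeholders.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH keeps running "exit 0" on the guest until sshd accepts the
// connection, the same probe the libmachine driver performs above.
func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // matches the ~3s gap between attempts in the log
	}
	return fmt.Errorf("ssh to %s not available after %v", host, timeout)
}

func main() {
	if err := waitForSSH("docker", "192.168.39.123", "/path/to/id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}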
	I0927 18:39:12.599426   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetConfigRaw
	I0927 18:39:12.600000   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:12.600222   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:12.600419   67596 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0927 18:39:12.600436   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetState
	I0927 18:39:12.601743   67596 main.go:141] libmachine: Detecting operating system of created instance...
	I0927 18:39:12.601761   67596 main.go:141] libmachine: Waiting for SSH to be available...
	I0927 18:39:12.601768   67596 main.go:141] libmachine: Getting to WaitForSSH function...
	I0927 18:39:12.601777   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:12.604563   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.604937   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:12.604965   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.605116   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:12.605313   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.605457   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.605595   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:12.605762   67596 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:12.606075   67596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0927 18:39:12.606096   67596 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0927 18:39:13.923650   67636 start.go:364] duration metric: took 36.037490266s to acquireMachinesLock for "kubernetes-upgrade-477684"
	I0927 18:39:13.923689   67636 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:39:13.923694   67636 fix.go:54] fixHost starting: 
	I0927 18:39:13.924127   67636 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:39:13.924179   67636 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:39:13.942506   67636 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38667
	I0927 18:39:13.943012   67636 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:39:13.943539   67636 main.go:141] libmachine: Using API Version  1
	I0927 18:39:13.943564   67636 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:39:13.943918   67636 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:39:13.944112   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:13.944316   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetState
	I0927 18:39:13.946035   67636 fix.go:112] recreateIfNeeded on kubernetes-upgrade-477684: state=Running err=<nil>
	W0927 18:39:13.946073   67636 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:39:13.948196   67636 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-477684" VM ...
	I0927 18:39:12.709988   67596 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:39:12.710009   67596 main.go:141] libmachine: Detecting the provisioner...
	I0927 18:39:12.710019   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:12.712939   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.713299   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:12.713326   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.713568   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:12.713772   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.713952   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.714156   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:12.714337   67596 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:12.714507   67596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0927 18:39:12.714517   67596 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0927 18:39:12.824322   67596 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0927 18:39:12.824414   67596 main.go:141] libmachine: found compatible host: buildroot
	I0927 18:39:12.824429   67596 main.go:141] libmachine: Provisioning with buildroot...
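The provisioner is chosen by reading /etc/os-release over SSH and matching its ID field; the Buildroot output above is what selects the buildroot provisioner. A minimal, illustrative sketch of that detection follows (field names follow the os-release(5) format shown in the log; this is not minikube's actual detection code).

// os_release.go: illustrative /etc/os-release parsing (sketch only).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	fields := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	// The log above matched ID=buildroot and picked the buildroot provisioner.
	fmt.Printf("ID=%s VERSION_ID=%s\n", fields["ID"], fields["VERSION_ID"])
}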
	I0927 18:39:12.824440   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetMachineName
	I0927 18:39:12.824727   67596 buildroot.go:166] provisioning hostname "custom-flannel-268892"
	I0927 18:39:12.824775   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetMachineName
	I0927 18:39:12.824991   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:12.828278   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.828723   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:12.828756   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.828869   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:12.829091   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.829273   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.829435   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:12.829687   67596 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:12.829960   67596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0927 18:39:12.829982   67596 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-268892 && echo "custom-flannel-268892" | sudo tee /etc/hostname
	I0927 18:39:12.957547   67596 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-268892
	
	I0927 18:39:12.957572   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:12.960289   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.960572   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:12.960609   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:12.960827   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:12.961038   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.961205   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:12.961332   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:12.961469   67596 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:12.961655   67596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0927 18:39:12.961678   67596 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-268892' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-268892/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-268892' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:39:13.075681   67596 main.go:141] libmachine: SSH cmd err, output: <nil>: 
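The script above makes the machine name resolve locally by rewriting (or appending) the 127.0.1.1 entry in /etc/hosts. An equivalent, illustrative Go sketch of the same logic is shown below; the hostname is taken from the log, root is required, and this is not minikube's code.

// set_hosts.go: illustrative 127.0.1.1 hostname mapping (sketch only).
package main

import (
	"os"
	"regexp"
	"strings"
)

func main() {
	const hostname = "custom-flannel-268892"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	text := string(data)
	// Already mapped? Mirrors the `grep -xq '.*\s<hostname>'` check above.
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(text) {
		return
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(text) {
		text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + hostname + "\n"
	}
	if err := os.WriteFile("/etc/hosts", []byte(text), 0644); err != nil {
		panic(err)
	}
}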
	I0927 18:39:13.075713   67596 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:39:13.075753   67596 buildroot.go:174] setting up certificates
	I0927 18:39:13.075767   67596 provision.go:84] configureAuth start
	I0927 18:39:13.075788   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetMachineName
	I0927 18:39:13.076149   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetIP
	I0927 18:39:13.079031   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.079458   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.079487   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.079590   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.081972   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.082279   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.082302   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.082482   67596 provision.go:143] copyHostCerts
	I0927 18:39:13.082526   67596 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:39:13.082536   67596 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:39:13.082587   67596 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:39:13.082697   67596 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:39:13.082706   67596 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:39:13.082728   67596 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:39:13.082792   67596 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:39:13.082799   67596 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:39:13.082816   67596 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:39:13.082860   67596 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-268892 san=[127.0.0.1 192.168.39.123 custom-flannel-268892 localhost minikube]
	I0927 18:39:13.197210   67596 provision.go:177] copyRemoteCerts
	I0927 18:39:13.197300   67596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:39:13.197335   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.200213   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.200600   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.200629   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.200898   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:13.201099   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.201307   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:13.201482   67596 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa Username:docker}
	I0927 18:39:13.286965   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:39:13.319376   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0927 18:39:13.345683   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0927 18:39:13.377676   67596 provision.go:87] duration metric: took 301.889482ms to configureAuth
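configureAuth copies the host CA material and issues a server certificate whose SANs match the "generating server cert" line above (127.0.0.1, 192.168.39.123, custom-flannel-268892, localhost, minikube). Below is a minimal crypto/x509 sketch of issuing such a CA-signed certificate; it assumes an RSA PKCS#1 CA key, and the serial number and validity are illustrative. This is not minikube's certificate code.

// server_cert.go: illustrative CA-signed serving cert with the logged SANs (sketch only).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// CA paths as reported in the log.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 CA key
	check(err)

	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // illustrative serial
		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-268892"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged: 127.0.0.1 192.168.39.123 custom-flannel-268892 localhost minikube
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.123")},
		DNSNames:    []string{"custom-flannel-268892", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}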
	I0927 18:39:13.377710   67596 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:39:13.377942   67596 config.go:182] Loaded profile config "custom-flannel-268892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:39:13.378049   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.381258   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.381719   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.381748   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.381971   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:13.382157   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.382316   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.382519   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:13.382750   67596 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:13.382936   67596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0927 18:39:13.382955   67596 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:39:13.667062   67596 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 18:39:13.667090   67596 main.go:141] libmachine: Checking connection to Docker...
	I0927 18:39:13.667100   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetURL
	I0927 18:39:13.668514   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | Using libvirt version 6000000
	I0927 18:39:13.671090   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.671489   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.671538   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.671704   67596 main.go:141] libmachine: Docker is up and running!
	I0927 18:39:13.671720   67596 main.go:141] libmachine: Reticulating splines...
	I0927 18:39:13.671728   67596 client.go:171] duration metric: took 28.834725033s to LocalClient.Create
	I0927 18:39:13.671754   67596 start.go:167] duration metric: took 28.834816254s to libmachine.API.Create "custom-flannel-268892"
	I0927 18:39:13.671767   67596 start.go:293] postStartSetup for "custom-flannel-268892" (driver="kvm2")
	I0927 18:39:13.671779   67596 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:39:13.671800   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:13.672099   67596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:39:13.672127   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.674491   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.674826   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.674850   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.675034   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:13.675233   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.675424   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:13.675626   67596 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa Username:docker}
	I0927 18:39:13.765836   67596 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:39:13.770235   67596 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:39:13.770285   67596 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:39:13.770363   67596 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:39:13.770461   67596 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:39:13.770589   67596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:39:13.780847   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:39:13.806989   67596 start.go:296] duration metric: took 135.208592ms for postStartSetup
	I0927 18:39:13.807045   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetConfigRaw
	I0927 18:39:13.807693   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetIP
	I0927 18:39:13.810536   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.810960   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.810994   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.811334   67596 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/config.json ...
	I0927 18:39:13.811528   67596 start.go:128] duration metric: took 28.996197702s to createHost
	I0927 18:39:13.811551   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.813702   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.814085   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.814123   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.814278   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:13.814465   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.814630   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.814784   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:13.815011   67596 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:13.815236   67596 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0927 18:39:13.815252   67596 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:39:13.923495   67596 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727462353.907258999
	
	I0927 18:39:13.923519   67596 fix.go:216] guest clock: 1727462353.907258999
	I0927 18:39:13.923526   67596 fix.go:229] Guest: 2024-09-27 18:39:13.907258999 +0000 UTC Remote: 2024-09-27 18:39:13.811539742 +0000 UTC m=+36.205673580 (delta=95.719257ms)
	I0927 18:39:13.923558   67596 fix.go:200] guest clock delta is within tolerance: 95.719257ms
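The guest clock check runs `date +%s.%N` on the VM and compares the result with the host-side timestamp of the call; the ~95.7ms delta above is within tolerance, so no resync is forced. A minimal sketch of that comparison follows; the guest value is the one logged, and the 1s tolerance is an assumption for illustration only.

// clock_delta.go: illustrative guest/host clock delta check (sketch only).
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1727462353.907258999" // output of `date +%s.%N` on the guest, as logged
	sec, nsec, _ := strings.Cut(guestRaw, ".")
	s, _ := strconv.ParseInt(sec, 10, 64)
	n, _ := strconv.ParseInt(nsec, 10, 64)
	guest := time.Unix(s, n)

	host := time.Now() // in the log this is the host-side timestamp of the SSH call
	delta := host.Sub(guest)

	const tolerance = time.Second // assumption; minikube applies its own threshold
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}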
	I0927 18:39:13.923565   67596 start.go:83] releasing machines lock for "custom-flannel-268892", held for 29.108406664s
	I0927 18:39:13.923593   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:13.923930   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetIP
	I0927 18:39:13.927027   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.927426   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.927457   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.927635   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:13.928150   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:13.928317   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .DriverName
	I0927 18:39:13.928427   67596 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:39:13.928465   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.928535   67596 ssh_runner.go:195] Run: cat /version.json
	I0927 18:39:13.928561   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHHostname
	I0927 18:39:13.931642   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.932098   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.932125   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.932145   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.932319   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:13.932515   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.932625   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:13.932656   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:13.932669   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:13.932845   67596 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa Username:docker}
	I0927 18:39:13.932935   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHPort
	I0927 18:39:13.933155   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHKeyPath
	I0927 18:39:13.933341   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetSSHUsername
	I0927 18:39:13.933493   67596 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/custom-flannel-268892/id_rsa Username:docker}
	I0927 18:39:14.053178   67596 ssh_runner.go:195] Run: systemctl --version
	I0927 18:39:14.060165   67596 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:39:14.228289   67596 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 18:39:14.235818   67596 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:39:14.235897   67596 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:39:14.256541   67596 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0927 18:39:14.256567   67596 start.go:495] detecting cgroup driver to use...
	I0927 18:39:14.256647   67596 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:39:14.278015   67596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:39:14.293994   67596 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:39:14.294073   67596 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:39:14.308780   67596 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:39:14.323683   67596 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:39:14.449634   67596 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:39:14.624164   67596 docker.go:233] disabling docker service ...
	I0927 18:39:14.624234   67596 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:39:14.643538   67596 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:39:14.661249   67596 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:39:14.835640   67596 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:39:14.963454   67596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:39:14.977762   67596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:39:14.996208   67596 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 18:39:14.996266   67596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:15.006374   67596 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:39:15.006436   67596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:15.016705   67596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:15.026685   67596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:15.037658   67596 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:39:15.048520   67596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:15.059402   67596 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:15.077068   67596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
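The sed commands above patch CRI-O's drop-in config: pause_image is pinned to registry.k8s.io/pause:3.10 and cgroup_manager is forced to cgroupfs. An illustrative Go equivalent of those two edits is sketched below; it requires root and is not minikube's code.

// crio_conf_patch.go: illustrative drop-in config edits (sketch only).
package main

import (
	"os"
	"regexp"
)

// setKey rewrites any `<key> = ...` line to the given quoted value,
// mirroring the sed substitutions in the log.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAllString(conf, key+` = "`+value+`"`)
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		panic(err)
	}
}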
	I0927 18:39:15.089166   67596 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:39:15.099077   67596 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0927 18:39:15.099130   67596 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0927 18:39:15.112056   67596 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
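Because the bridge-nf sysctl is missing (the status 255 above), the driver falls back to loading br_netfilter and then enables IPv4 forwarding. A minimal sketch of that fallback sequence is shown below; it requires root and is illustrative only.

// netfilter_prep.go: illustrative br_netfilter / ip_forward preparation (sketch only).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// sysctl not exposed yet: load the kernel module, as the log does
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("bridge netfilter and IPv4 forwarding prepared")
}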
	I0927 18:39:15.121874   67596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:39:15.251648   67596 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 18:39:15.350761   67596 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:39:15.350838   67596 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:39:15.355502   67596 start.go:563] Will wait 60s for crictl version
	I0927 18:39:15.355652   67596 ssh_runner.go:195] Run: which crictl
	I0927 18:39:15.359432   67596 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:39:15.400063   67596 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:39:15.400156   67596 ssh_runner.go:195] Run: crio --version
	I0927 18:39:15.432624   67596 ssh_runner.go:195] Run: crio --version
	I0927 18:39:15.465708   67596 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 18:39:12.623397   66146 node_ready.go:53] node "calico-268892" has status "Ready":"False"
	I0927 18:39:14.558136   66146 node_ready.go:49] node "calico-268892" has status "Ready":"True"
	I0927 18:39:14.558162   66146 node_ready.go:38] duration metric: took 8.504780642s for node "calico-268892" to be "Ready" ...
	I0927 18:39:14.558173   66146 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:39:14.575959   66146 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace to be "Ready" ...
	I0927 18:39:16.608679   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:15.467014   67596 main.go:141] libmachine: (custom-flannel-268892) Calling .GetIP
	I0927 18:39:15.469689   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:15.470030   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:89:00", ip: ""} in network mk-custom-flannel-268892: {Iface:virbr1 ExpiryTime:2024-09-27 19:39:00 +0000 UTC Type:0 Mac:52:54:00:88:89:00 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:custom-flannel-268892 Clientid:01:52:54:00:88:89:00}
	I0927 18:39:15.470062   67596 main.go:141] libmachine: (custom-flannel-268892) DBG | domain custom-flannel-268892 has defined IP address 192.168.39.123 and MAC address 52:54:00:88:89:00 in network mk-custom-flannel-268892
	I0927 18:39:15.470306   67596 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0927 18:39:15.474550   67596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0927 18:39:15.488254   67596 kubeadm.go:883] updating cluster {Name:custom-flannel-268892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:39:15.488384   67596 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:39:15.488456   67596 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:39:15.521895   67596 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0927 18:39:15.521957   67596 ssh_runner.go:195] Run: which lz4
	I0927 18:39:15.525777   67596 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0927 18:39:15.529772   67596 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0927 18:39:15.529808   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0927 18:39:17.100533   67596 crio.go:462] duration metric: took 1.574779843s to copy over tarball
	I0927 18:39:17.100614   67596 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
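With no preloaded images on the guest, the tarball is copied to /preloaded.tar.lz4 and unpacked into /var with an lz4-aware tar, as above. An illustrative sketch of the extraction step follows, shown as local commands rather than over SSH; it is not minikube's code.

// preload_extract.go: illustrative preload tarball extraction (sketch only).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball missing, would scp it first: %v", err)
	}
	// Same flags as the logged command: keep xattrs, decompress with lz4, extract under /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v: %s", err, out)
	}
	log.Println("preloaded images extracted")
}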
	I0927 18:39:13.949670   67636 machine.go:93] provisionDockerMachine start ...
	I0927 18:39:13.949704   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:13.949922   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:13.952562   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:13.952975   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:13.953012   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:13.953179   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:13.953392   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:13.953571   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:13.953702   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:13.953860   67636 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:13.954103   67636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:39:13.954116   67636 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:39:14.066791   67636 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-477684
	
	I0927 18:39:14.066818   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:39:14.067077   67636 buildroot.go:166] provisioning hostname "kubernetes-upgrade-477684"
	I0927 18:39:14.067108   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:39:14.067301   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:14.070241   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.070619   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:14.070675   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.070805   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:14.070974   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.071159   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.071306   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:14.071487   67636 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:14.071690   67636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:39:14.071705   67636 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-477684 && echo "kubernetes-upgrade-477684" | sudo tee /etc/hostname
	I0927 18:39:14.206711   67636 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-477684
	
	I0927 18:39:14.206747   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:14.209839   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.210410   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:14.210438   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.210720   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:14.210972   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.211196   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.211432   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:14.211636   67636 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:14.211848   67636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:39:14.211874   67636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-477684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-477684/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-477684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:39:14.332215   67636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:39:14.332247   67636 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:39:14.332277   67636 buildroot.go:174] setting up certificates
	I0927 18:39:14.332290   67636 provision.go:84] configureAuth start
	I0927 18:39:14.332309   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:39:14.332621   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:39:14.335621   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.335978   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:14.336000   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.336248   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:14.338758   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.339134   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:14.339162   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.339295   67636 provision.go:143] copyHostCerts
	I0927 18:39:14.339359   67636 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:39:14.339369   67636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:39:14.339422   67636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:39:14.339530   67636 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:39:14.339540   67636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:39:14.339562   67636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:39:14.339614   67636 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:39:14.339625   67636 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:39:14.339642   67636 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:39:14.339686   67636 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-477684 san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-477684 localhost minikube]
	I0927 18:39:14.423090   67636 provision.go:177] copyRemoteCerts
	I0927 18:39:14.423158   67636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:39:14.423182   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:14.426154   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.426684   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:14.426715   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.426932   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:14.427110   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.427264   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:14.427421   67636 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:39:14.522580   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:39:14.549060   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0927 18:39:14.583108   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 18:39:14.612361   67636 provision.go:87] duration metric: took 280.053967ms to configureAuth
	I0927 18:39:14.612399   67636 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:39:14.612632   67636 config.go:182] Loaded profile config "kubernetes-upgrade-477684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:39:14.612744   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:14.616489   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.616924   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:14.616967   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:14.617222   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:14.617453   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.617612   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:14.617733   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:14.617908   67636 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:14.618135   67636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:39:14.618156   67636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:39:19.082534   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:21.173863   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:22.273723   67596 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.173077199s)
	I0927 18:39:22.273761   67596 crio.go:469] duration metric: took 5.173194915s to extract the tarball
	I0927 18:39:22.273772   67596 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0927 18:39:22.316498   67596 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:39:22.376245   67596 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:39:22.376282   67596 cache_images.go:84] Images are preloaded, skipping loading
	I0927 18:39:22.376294   67596 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.31.1 crio true true} ...
	I0927 18:39:22.376444   67596 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-268892 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I0927 18:39:22.376542   67596 ssh_runner.go:195] Run: crio config
	I0927 18:39:22.456258   67596 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0927 18:39:22.456303   67596 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:39:22.456334   67596 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-268892 NodeName:custom-flannel-268892 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:39:22.456519   67596 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-268892"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:39:22.456578   67596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:39:22.468874   67596 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:39:22.468961   67596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:39:22.484245   67596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0927 18:39:22.506805   67596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:39:22.528952   67596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
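The kubeadm configuration rendered above has just been written to /var/tmp/minikube/kubeadm.yaml.new on the node; the start path later copies it to /var/tmp/minikube/kubeadm.yaml and feeds it to kubeadm init. A minimal sketch of checking such a config by hand, assuming the same binary and file paths as in this log:

    # Validate the generated config without changing the node (sketch only).
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --dry-run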
	I0927 18:39:22.551118   67596 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0927 18:39:22.555780   67596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
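The one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts. The same idiom, expanded for readability (a sketch using a hypothetical temp file name):

    # Drop any stale entry, append the current mapping, then install the result.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    printf '192.168.39.123\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts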
	I0927 18:39:22.570465   67596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:39:21.309513   67636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 18:39:21.309557   67636 machine.go:96] duration metric: took 7.359868621s to provisionDockerMachine
	I0927 18:39:21.309580   67636 start.go:293] postStartSetup for "kubernetes-upgrade-477684" (driver="kvm2")
	I0927 18:39:21.309701   67636 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:39:21.309742   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:21.310799   67636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:39:21.310859   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:21.317264   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.318009   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:21.318051   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.318330   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:21.318737   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:21.318999   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:21.319187   67636 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:39:21.417507   67636 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:39:21.423806   67636 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:39:21.423835   67636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:39:21.423926   67636 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:39:21.424032   67636 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:39:21.424204   67636 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:39:21.434125   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:39:21.460652   67636 start.go:296] duration metric: took 151.049963ms for postStartSetup
	I0927 18:39:21.460704   67636 fix.go:56] duration metric: took 7.537008186s for fixHost
	I0927 18:39:21.460732   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:21.464084   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.464514   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:21.464550   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.464647   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:21.464857   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:21.465038   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:21.465227   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:21.465381   67636 main.go:141] libmachine: Using SSH client type: native
	I0927 18:39:21.465587   67636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:39:21.465600   67636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:39:21.584952   67636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727462361.573381192
	
	I0927 18:39:21.584982   67636 fix.go:216] guest clock: 1727462361.573381192
	I0927 18:39:21.584992   67636 fix.go:229] Guest: 2024-09-27 18:39:21.573381192 +0000 UTC Remote: 2024-09-27 18:39:21.460711128 +0000 UTC m=+43.731386792 (delta=112.670064ms)
	I0927 18:39:21.585020   67636 fix.go:200] guest clock delta is within tolerance: 112.670064ms
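The skew check above is plain subtraction of the recorded Remote time from the Guest time, and the start continues because the result is under the allowed tolerance. The arithmetic, reproduced with bc:

    # guest clock minus host-side remote time = reported delta
    echo '1727462361.573381192 - 1727462361.460711128' | bc
    # .112670064  (112.670064ms, matching the log line above)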
	I0927 18:39:21.585028   67636 start.go:83] releasing machines lock for "kubernetes-upgrade-477684", held for 7.661355993s
	I0927 18:39:21.585064   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:21.585382   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:39:21.588974   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.589406   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:21.589439   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.589638   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:21.590249   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:21.590454   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:39:21.590567   67636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:39:21.590609   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:21.590730   67636 ssh_runner.go:195] Run: cat /version.json
	I0927 18:39:21.590753   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:39:21.593641   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.594096   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:21.594150   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.594293   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:21.594446   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.594482   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:21.594729   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:21.594864   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:21.594886   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:21.594883   67636 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:39:21.595461   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:39:21.595658   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:39:21.595802   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:39:21.595901   67636 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:39:21.721072   67636 ssh_runner.go:195] Run: systemctl --version
	I0927 18:39:21.728487   67636 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:39:21.945620   67636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 18:39:21.970471   67636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:39:21.970584   67636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:39:22.010171   67636 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
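The find invocation above is logged with its shell quoting flattened; an equivalent, more readable form (a sketch with the same intent: park any bridge or podman CNI configs so they cannot conflict with the CNI about to be installed):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    # mv runs as root because the whole find is already under sudo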
	I0927 18:39:22.010201   67636 start.go:495] detecting cgroup driver to use...
	I0927 18:39:22.010276   67636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:39:22.038427   67636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:39:22.066572   67636 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:39:22.066678   67636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:39:22.087127   67636 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:39:22.102942   67636 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:39:22.350488   67636 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:39:22.714862   67636 docker.go:233] disabling docker service ...
	I0927 18:39:22.714952   67636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:39:22.747982   67596 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:39:22.779730   67596 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892 for IP: 192.168.39.123
	I0927 18:39:22.779767   67596 certs.go:194] generating shared ca certs ...
	I0927 18:39:22.779804   67596 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:22.780026   67596 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:39:22.780159   67596 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:39:22.780226   67596 certs.go:256] generating profile certs ...
	I0927 18:39:22.780349   67596 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.key
	I0927 18:39:22.780399   67596 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.crt with IP's: []
	I0927 18:39:23.097491   67596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.crt ...
	I0927 18:39:23.097522   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.crt: {Name:mk88f31ed4f690e982e4e7d11cd544409b60b033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:23.097821   67596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.key ...
	I0927 18:39:23.097841   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.key: {Name:mk2ccf833af5975b1a68649b79a8b0378e541331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:23.098006   67596 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.key.e68a5842
	I0927 18:39:23.098029   67596 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.crt.e68a5842 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I0927 18:39:23.324697   67596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.crt.e68a5842 ...
	I0927 18:39:23.324753   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.crt.e68a5842: {Name:mk6847fa01084ed6470c162ffd7574a1b542ce02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:23.325076   67596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.key.e68a5842 ...
	I0927 18:39:23.325110   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.key.e68a5842: {Name:mkf667154cbd039c774e48c8b945b5b6f2c1bc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:23.325257   67596 certs.go:381] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.crt.e68a5842 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.crt
	I0927 18:39:23.325420   67596 certs.go:385] copying /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.key.e68a5842 -> /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.key
	I0927 18:39:23.325517   67596 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.key
	I0927 18:39:23.325542   67596 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.crt with IP's: []
	I0927 18:39:23.452708   67596 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.crt ...
	I0927 18:39:23.452740   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.crt: {Name:mk0842002499ee88f9ab5884e47b313fa96fb41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:23.488138   67596 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.key ...
	I0927 18:39:23.488184   67596 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.key: {Name:mk669be92e0ddd443cf39f0acb642b97dc7f7a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:23.488500   67596 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:39:23.488544   67596 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:39:23.488553   67596 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:39:23.488648   67596 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:39:23.488685   67596 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:39:23.488716   67596 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:39:23.488781   67596 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:39:23.489744   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:39:23.525245   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:39:23.561323   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:39:23.607148   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:39:23.645347   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0927 18:39:23.678049   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 18:39:23.710799   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:39:23.741890   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 18:39:23.777089   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:39:23.807983   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:39:23.849982   67596 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:39:23.883844   67596 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
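Everything the cluster needs at boot is now under /var/lib/minikube/certs on the node. A sketch of sanity-checking the API server certificate copied above, whose SANs should include the 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.123 addresses listed at generation time:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'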
	I0927 18:39:23.909606   67596 ssh_runner.go:195] Run: openssl version
	I0927 18:39:23.916314   67596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:39:23.928883   67596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:39:23.935254   67596 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:39:23.935374   67596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:39:23.944100   67596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:39:23.956584   67596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:39:23.969293   67596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:39:23.975936   67596 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:39:23.976017   67596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:39:23.983758   67596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 18:39:23.999283   67596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:39:24.011617   67596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:39:24.018855   67596 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:39:24.018933   67596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:39:24.035437   67596 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
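The link names used above (51391683.0, 3ec20f2e.0, b5213941.0) come from the openssl x509 -hash calls that precede them, following the usual /etc/ssl/certs subject-hash convention. The same pattern for the minikubeCA certificate, as a sketch:

    subj_hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${subj_hash}.0"
    # with this log's certificate, ${subj_hash} is b5213941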
	I0927 18:39:24.051766   67596 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:39:24.057698   67596 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 18:39:24.057761   67596 kubeadm.go:392] StartCluster: {Name:custom-flannel-268892 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-268892 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:39:24.057877   67596 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:39:24.057981   67596 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:39:24.115679   67596 cri.go:89] found id: ""
	I0927 18:39:24.115798   67596 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 18:39:24.130561   67596 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 18:39:24.145824   67596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 18:39:24.160340   67596 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 18:39:24.160366   67596 kubeadm.go:157] found existing configuration files:
	
	I0927 18:39:24.160420   67596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 18:39:24.174456   67596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 18:39:24.174526   67596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 18:39:24.188575   67596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 18:39:24.211106   67596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 18:39:24.211178   67596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 18:39:24.226298   67596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 18:39:24.250025   67596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 18:39:24.250290   67596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 18:39:24.282495   67596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 18:39:24.297335   67596 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 18:39:24.297406   67596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 18:39:24.310750   67596 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0927 18:39:24.387497   67596 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 18:39:24.387690   67596 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 18:39:24.496550   67596 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 18:39:24.496705   67596 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 18:39:24.496829   67596 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 18:39:24.504937   67596 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 18:39:22.898069   67636 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:39:23.055104   67636 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:39:23.405195   67636 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:39:23.782312   67636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:39:23.803646   67636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:39:23.839940   67636 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 18:39:23.840101   67636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:23.858071   67636 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:39:23.858158   67636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:23.878522   67636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:23.897571   67636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:23.925905   67636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:39:23.993784   67636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:24.041091   67636 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:39:24.110078   67636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
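Taken together, the sed edits above set the pause image, the cgroup manager, the conmon cgroup and the unprivileged-port sysctl in CRI-O's drop-in config. Inspecting the file afterwards should show roughly the following keys (an assumed reconstruction, limited to the keys touched above):

    cat /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    # default_sysctls = [
    #   "net.ipv4.ip_unprivileged_port_start=0",
    # ]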
	I0927 18:39:24.153086   67636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:39:24.203787   67636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:39:24.292151   67636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:39:24.581112   67636 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0927 18:39:26.095373   67636 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.514222929s)
	I0927 18:39:26.095407   67636 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:39:26.095508   67636 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:39:26.103534   67636 start.go:563] Will wait 60s for crictl version
	I0927 18:39:26.103602   67636 ssh_runner.go:195] Run: which crictl
	I0927 18:39:26.107233   67636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:39:26.142304   67636 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:39:26.142433   67636 ssh_runner.go:195] Run: crio --version
	I0927 18:39:26.171799   67636 ssh_runner.go:195] Run: crio --version
	I0927 18:39:26.205166   67636 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 18:39:23.584159   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:26.083108   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:24.507627   67596 out.go:235]   - Generating certificates and keys ...
	I0927 18:39:24.507763   67596 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 18:39:24.507850   67596 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 18:39:24.931849   67596 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 18:39:25.033746   67596 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 18:39:25.373615   67596 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 18:39:25.544064   67596 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 18:39:25.646003   67596 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 18:39:25.646215   67596 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-268892 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0927 18:39:25.795570   67596 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 18:39:25.795884   67596 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-268892 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0927 18:39:25.946324   67596 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 18:39:26.662178   67596 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 18:39:26.931097   67596 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 18:39:26.931376   67596 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 18:39:26.997480   67596 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 18:39:27.115636   67596 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 18:39:27.210446   67596 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 18:39:27.276921   67596 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 18:39:27.440797   67596 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 18:39:27.441378   67596 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 18:39:27.444226   67596 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 18:39:27.446462   67596 out.go:235]   - Booting up control plane ...
	I0927 18:39:27.446605   67596 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 18:39:27.446779   67596 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 18:39:27.449417   67596 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 18:39:27.468334   67596 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 18:39:27.476063   67596 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 18:39:27.476135   67596 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 18:39:27.616313   67596 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 18:39:27.616519   67596 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
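The kubelet-check above polls the kubelet's local health endpoint until it answers; the equivalent manual probe on the node is a one-liner:

    curl -sf http://127.0.0.1:10248/healthz && echo ok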
	I0927 18:39:26.206384   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:39:26.209876   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:26.210397   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:39:26.210434   67636 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:39:26.210773   67636 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0927 18:39:26.215760   67636 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:39:26.215910   67636 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:39:26.215970   67636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:39:26.269285   67636 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:39:26.269311   67636 crio.go:433] Images already preloaded, skipping extraction
	I0927 18:39:26.269366   67636 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:39:26.411697   67636 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:39:26.411824   67636 cache_images.go:84] Images are preloaded, skipping loading
	I0927 18:39:26.411872   67636 kubeadm.go:934] updating node { 192.168.50.36 8443 v1.31.1 crio true true} ...
	I0927 18:39:26.412165   67636 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-477684 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 18:39:26.412329   67636 ssh_runner.go:195] Run: crio config
	I0927 18:39:26.689037   67636 cni.go:84] Creating CNI manager for ""
	I0927 18:39:26.689071   67636 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:39:26.689082   67636 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:39:26.689124   67636 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-477684 NodeName:kubernetes-upgrade-477684 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:39:26.689377   67636 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-477684"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.36
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:39:26.689487   67636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:39:26.717885   67636 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:39:26.717979   67636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:39:26.786034   67636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0927 18:39:26.928292   67636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:39:26.965007   67636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0927 18:39:27.032793   67636 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I0927 18:39:27.039736   67636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:39:27.297999   67636 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:39:27.324239   67636 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684 for IP: 192.168.50.36
	I0927 18:39:27.324268   67636 certs.go:194] generating shared ca certs ...
	I0927 18:39:27.324290   67636 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:39:27.324559   67636 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:39:27.324632   67636 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:39:27.324647   67636 certs.go:256] generating profile certs ...
	I0927 18:39:27.324778   67636 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/client.key
	I0927 18:39:27.324852   67636 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key.e0436798
	I0927 18:39:27.324910   67636 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.key
	I0927 18:39:27.325077   67636 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:39:27.325130   67636 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:39:27.325144   67636 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:39:27.325188   67636 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:39:27.325223   67636 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:39:27.325263   67636 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:39:27.325346   67636 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:39:27.330260   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:39:27.415076   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:39:27.481395   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:39:27.527038   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:39:27.585224   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0927 18:39:27.631695   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 18:39:27.670278   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:39:27.711334   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 18:39:27.744715   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:39:27.769082   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:39:28.085143   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:30.584762   66146 pod_ready.go:103] pod "calico-kube-controllers-b8d8894fb-z9bb6" in "kube-system" namespace has status "Ready":"False"
	I0927 18:39:28.117395   67596 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.631141ms
	I0927 18:39:28.117533   67596 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 18:39:27.793394   67636 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:39:27.827374   67636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:39:27.879695   67636 ssh_runner.go:195] Run: openssl version
	I0927 18:39:27.885839   67636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:39:27.896781   67636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:39:27.901558   67636 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:39:27.901629   67636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:39:27.907966   67636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
	I0927 18:39:27.921938   67636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:39:27.936703   67636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:39:27.943468   67636 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:39:27.943536   67636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:39:27.951645   67636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:39:27.963284   67636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:39:27.975256   67636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:39:27.981611   67636 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:39:27.981678   67636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:39:27.989336   67636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
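	Editor's note: the three Run lines above install each CA into the system trust store using OpenSSL's hashed-symlink convention (compute the subject hash with `openssl x509 -hash -noout`, then link the PEM as /etc/ssl/certs/<hash>.0). The following is a minimal Go sketch of that step only, not minikube's own code; the binary, paths, and function name are illustrative and it assumes openssl is on PATH.

	// linkhash.go: illustrative reproduction of the hashed-symlink step logged above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkHashedCert asks openssl for the certificate's subject hash and links the
	// PEM into certsDir as <hash>.0, the lookup name OpenSSL-based clients expect.
	func linkHashedCert(pemPath, certsDir string) error {
		// "openssl x509 -hash -noout" prints the 8-hex-digit subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))

		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // equivalent of "ln -fs": replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		// Hypothetical invocation mirroring the minikubeCA.pem line in the log.
		if err := linkHashedCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}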
	I0927 18:39:28.002143   67636 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:39:28.007359   67636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 18:39:28.014631   67636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 18:39:28.022975   67636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 18:39:28.031600   67636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 18:39:28.039917   67636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 18:39:28.048363   67636 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
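	Editor's note: the `-checkend 86400` probes above succeed only if the certificate will still be valid 24 hours from now; a non-zero openssl exit marks a cert that is about to expire. Below is a minimal Go equivalent for local use, not minikube's implementation; the cert list is an illustrative subset of the paths shown in the log and openssl is assumed to be on PATH.

	// checkend.go: illustrative local version of the expiry probes logged above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithinADay reports whether openssl's -checkend 86400 probe failed.
	// Run returns a non-nil error when openssl exits non-zero (the cert expires
	// within 86400 seconds) or when openssl cannot be executed at all.
	func expiresWithinADay(certPath string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		for _, c := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			fmt.Printf("%s expiring within 24h: %v\n", c, expiresWithinADay(c))
		}
	}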
	I0927 18:39:28.057638   67636 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:39:28.057751   67636 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:39:28.057842   67636 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:39:28.117367   67636 cri.go:89] found id: "3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47"
	I0927 18:39:28.117399   67636 cri.go:89] found id: "fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1"
	I0927 18:39:28.117405   67636 cri.go:89] found id: "8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a"
	I0927 18:39:28.117409   67636 cri.go:89] found id: "cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1"
	I0927 18:39:28.117414   67636 cri.go:89] found id: "39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2"
	I0927 18:39:28.117419   67636 cri.go:89] found id: "d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548"
	I0927 18:39:28.117423   67636 cri.go:89] found id: "f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352"
	I0927 18:39:28.117427   67636 cri.go:89] found id: "46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5"
	I0927 18:39:28.117431   67636 cri.go:89] found id: ""
	I0927 18:39:28.117481   67636 ssh_runner.go:195] Run: sudo runc list -f json
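	Editor's note: the "found id" lines above come from the same crictl query recorded two lines earlier: list containers in every state, print only their IDs, and filter on the io.kubernetes.pod.namespace=kube-system label. A standalone Go sketch of that query follows; it is not minikube's cri.go code, and it assumes crictl is installed with its CRI endpoint already configured.

	// crilist.go: illustrative reproduction of the kube-system container discovery step.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs runs the same crictl query as the log line above and
	// returns the container IDs, which crictl prints one per line in --quiet mode.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl query failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}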
	
	
	==> CRI-O <==
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.911322238Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d488829d-e085-4680-9c72-3000efb664b1 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.912885466Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83e5dca6-a94d-4c95-96b6-9fae4866957c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.913271514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462377913247638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83e5dca6-a94d-4c95-96b6-9fae4866957c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.913919900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd8d40f2-2ae3-4368-b8cd-43d271cd5a20 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.913998241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd8d40f2-2ae3-4368-b8cd-43d271cd5a20 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.915029232Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63c992ccc7bba545584634e00ed0d0698b75c7af484db9be2992faaceb62a2f1,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727462374087013612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2e2289b3e8353ee8e8b0409bd3520aa85024e9fc6e928d1fa6bdff5dcc33c7,PodSandboxId:807c3ffb4a233c760da31b8f70bb46db48a1ffb1093aa501e3c00df80f47c226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462374066223307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83815877faed962ab68b30e231705593eb081f15fd528a5f996216830ca48a47,PodSandboxId:a2b23a65cb1b97f649d7eefd2f6b78e7a158602f18fe90619c4a563a979d27a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374096673688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2825128f60ed5a4a8a6b2e596a995b953f46aabfd7c61b244b774dfdca2ad37b,PodSandboxId:273cb3712f326d81481e696de91fec0faada3dbfec55ee7eee6bfbdc74debd01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374076768402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c
4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b328d54d7eb3722d57a33d292bcbcc31ee1af6d1b487a44fa540b1931d2b8,PodSandboxId:c4c2844f6ad8993188de8492efd7e818336693436a07b0d5ff6e9c41332ad4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462370197237194,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62d411f55ab38d9bf816369e92861ccd13673ff16c0a72c8077fc1c4b120453,PodSandboxId:2ba7c2dcec748f1980469668d3b015c08cf29b85be74d2b3c2ddc2d3ead065d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462370222440812,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d6edc83afddc5c6b8ce6e44f036eb49fdaa4e8ce06b739b2a18f20ee1a6ffc,PodSandboxId:92977a7bc7e888ad02ff6f959592ab496a48896e20d93b38acc8c24bd02e449f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462370208860052,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3feb6a79a3350f141a40e0475c6a639c2fbd467b56104e696bd374449e32547,PodSandboxId:34525136e319d3d7fda2ed1b430e5fa802147de75331e500a9ef5f283351c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462370181406697,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727462367176296629,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548,PodSandboxId:622817cb361e116ae72356f438ef4de9c6266dc3385e71aa8736abed69662068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462363116138727,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1,PodSandboxId:c6b1a2bad1f5995428a2bf87c9d8661db282fb753329c59ac8dac2e6bb4f7f62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462364053542371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a,PodSandboxId:7f1d83a766fc1b3bf9ddd8e91690403e2a181d7b5588c5afbfa80ef74d826521,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462363989484450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2,PodSandboxId:fe26e10e875de18d62fbd6252bd0c9c3ea3c6152f39030bb3080f1a427ed4422,Metadata:&ContainerMetadata{
Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462363268390092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1,PodSandboxId:c406aba7c6c9caffa1f11d42e5ca122d51bd2e4673c1f736eb1db05f41f3228f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462363285651561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352,PodSandboxId:3151b9fd4c7d52438545fb0f12582c7df6f11e85bafd778f5763725b1fa32c99,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462362923927702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5,PodSandboxId:42ee4cff018d84e9ded2ef2db042d87d4d2dbab84d591e4442d197df5f76785e,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462362711763921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd8d40f2-2ae3-4368-b8cd-43d271cd5a20 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.950249102Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c4b042a8-869b-446a-be7c-8bd35a41578f name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.950627689Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:273cb3712f326d81481e696de91fec0faada3dbfec55ee7eee6bfbdc74debd01,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tljn5,Uid:096154c3-c5ec-441c-b1bd-5c4fb95f211c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462367242021106,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c4fb95f211c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:38:40.180653798Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2b23a65cb1b97f649d7eefd2f6b78e7a158602f18fe90619c4a563a979d27a8,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dkpwl,Uid:83b9300d-d68b-4c85-941b-0fdbd39d4ccf,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462367180942226,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:38:40.158919811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4b735f47-d670-4415-b040-37fa30dfc415,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462366735692295,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-27T18:38:39.372885032Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ba7c2dcec748f1980469668d3b015c08cf29b85be74d2b3c2ddc2d3ead065d7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-477684,Uid:68df2
1b27abba208f190d6ffbb0fc52e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462366717410528,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 68df21b27abba208f190d6ffbb0fc52e,kubernetes.io/config.seen: 2024-09-27T18:38:26.500552983Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:807c3ffb4a233c760da31b8f70bb46db48a1ffb1093aa501e3c00df80f47c226,Metadata:&PodSandboxMetadata{Name:kube-proxy-76w2d,Uid:e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462366660958033,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:38:40.024322442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92977a7bc7e888ad02ff6f959592ab496a48896e20d93b38acc8c24bd02e449f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-477684,Uid:a1c7ac2259f9a261bb6d913d95acba2a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462366648567306,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a1c7ac2259f9a261bb6d913d95acba2a,kubernetes.io/config.seen: 2024-09-27T18:38:26.500557043Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3
4525136e319d3d7fda2ed1b430e5fa802147de75331e500a9ef5f283351c26e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-477684,Uid:65f7289bc4d268407b252bcf7901d567,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1727462366352011929,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.36:8443,kubernetes.io/config.hash: 65f7289bc4d268407b252bcf7901d567,kubernetes.io/config.seen: 2024-09-27T18:38:26.500558082Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4c2844f6ad8993188de8492efd7e818336693436a07b0d5ff6e9c41332ad4fa,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-477684,Uid:19384059162ea6d3bb38cb3aac20162d,Namespace:kube-system,Attem
pt:2,},State:SANDBOX_READY,CreatedAt:1727462366335383437,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.36:2379,kubernetes.io/config.hash: 19384059162ea6d3bb38cb3aac20162d,kubernetes.io/config.seen: 2024-09-27T18:38:26.532428732Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:622817cb361e116ae72356f438ef4de9c6266dc3385e71aa8736abed69662068,Metadata:&PodSandboxMetadata{Name:kube-proxy-76w2d,Uid:e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362432175997,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:38:40.024322442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c406aba7c6c9caffa1f11d42e5ca122d51bd2e4673c1f736eb1db05f41f3228f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-477684,Uid:a1c7ac2259f9a261bb6d913d95acba2a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362390098099,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a1c7ac2259f9a261bb6d913d95acba2a,kubernetes.io/config.seen: 2024-09-27T18:38:26.500557043Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3151b9fd4c7
d52438545fb0f12582c7df6f11e85bafd778f5763725b1fa32c99,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-477684,Uid:68df21b27abba208f190d6ffbb0fc52e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362353408549,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 68df21b27abba208f190d6ffbb0fc52e,kubernetes.io/config.seen: 2024-09-27T18:38:26.500552983Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7f1d83a766fc1b3bf9ddd8e91690403e2a181d7b5588c5afbfa80ef74d826521,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tljn5,Uid:096154c3-c5ec-441c-b1bd-5c4fb95f211c,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362299786043,Labels
:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c4fb95f211c,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:38:40.180653798Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe26e10e875de18d62fbd6252bd0c9c3ea3c6152f39030bb3080f1a427ed4422,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-477684,Uid:65f7289bc4d268407b252bcf7901d567,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362269691986,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-addre
ss.endpoint: 192.168.50.36:8443,kubernetes.io/config.hash: 65f7289bc4d268407b252bcf7901d567,kubernetes.io/config.seen: 2024-09-27T18:38:26.500558082Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c6b1a2bad1f5995428a2bf87c9d8661db282fb753329c59ac8dac2e6bb4f7f62,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dkpwl,Uid:83b9300d-d68b-4c85-941b-0fdbd39d4ccf,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362246685097,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-27T18:38:40.158919811Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42ee4cff018d84e9ded2ef2db042d87d4d2dbab84d591e4442d197df5f76785e,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-477684,Uid:1938
4059162ea6d3bb38cb3aac20162d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1727462362225691592,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.36:2379,kubernetes.io/config.hash: 19384059162ea6d3bb38cb3aac20162d,kubernetes.io/config.seen: 2024-09-27T18:38:26.532428732Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c4b042a8-869b-446a-be7c-8bd35a41578f name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.951806859Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c4e7a4e-4a31-49df-98a4-3fb31278e02d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.951898815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c4e7a4e-4a31-49df-98a4-3fb31278e02d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.952409651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63c992ccc7bba545584634e00ed0d0698b75c7af484db9be2992faaceb62a2f1,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727462374087013612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2e2289b3e8353ee8e8b0409bd3520aa85024e9fc6e928d1fa6bdff5dcc33c7,PodSandboxId:807c3ffb4a233c760da31b8f70bb46db48a1ffb1093aa501e3c00df80f47c226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462374066223307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83815877faed962ab68b30e231705593eb081f15fd528a5f996216830ca48a47,PodSandboxId:a2b23a65cb1b97f649d7eefd2f6b78e7a158602f18fe90619c4a563a979d27a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374096673688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2825128f60ed5a4a8a6b2e596a995b953f46aabfd7c61b244b774dfdca2ad37b,PodSandboxId:273cb3712f326d81481e696de91fec0faada3dbfec55ee7eee6bfbdc74debd01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374076768402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c
4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b328d54d7eb3722d57a33d292bcbcc31ee1af6d1b487a44fa540b1931d2b8,PodSandboxId:c4c2844f6ad8993188de8492efd7e818336693436a07b0d5ff6e9c41332ad4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462370197237194,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62d411f55ab38d9bf816369e92861ccd13673ff16c0a72c8077fc1c4b120453,PodSandboxId:2ba7c2dcec748f1980469668d3b015c08cf29b85be74d2b3c2ddc2d3ead065d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462370222440812,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d6edc83afddc5c6b8ce6e44f036eb49fdaa4e8ce06b739b2a18f20ee1a6ffc,PodSandboxId:92977a7bc7e888ad02ff6f959592ab496a48896e20d93b38acc8c24bd02e449f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462370208860052,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3feb6a79a3350f141a40e0475c6a639c2fbd467b56104e696bd374449e32547,PodSandboxId:34525136e319d3d7fda2ed1b430e5fa802147de75331e500a9ef5f283351c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462370181406697,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727462367176296629,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548,PodSandboxId:622817cb361e116ae72356f438ef4de9c6266dc3385e71aa8736abed69662068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462363116138727,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1,PodSandboxId:c6b1a2bad1f5995428a2bf87c9d8661db282fb753329c59ac8dac2e6bb4f7f62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462364053542371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a,PodSandboxId:7f1d83a766fc1b3bf9ddd8e91690403e2a181d7b5588c5afbfa80ef74d826521,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462363989484450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2,PodSandboxId:fe26e10e875de18d62fbd6252bd0c9c3ea3c6152f39030bb3080f1a427ed4422,Metadata:&ContainerMetadata{
Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462363268390092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1,PodSandboxId:c406aba7c6c9caffa1f11d42e5ca122d51bd2e4673c1f736eb1db05f41f3228f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462363285651561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352,PodSandboxId:3151b9fd4c7d52438545fb0f12582c7df6f11e85bafd778f5763725b1fa32c99,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462362923927702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5,PodSandboxId:42ee4cff018d84e9ded2ef2db042d87d4d2dbab84d591e4442d197df5f76785e,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462362711763921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c4e7a4e-4a31-49df-98a4-3fb31278e02d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.971679290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91023e6f-8960-4cb2-a91c-a83e337119db name=/runtime.v1.RuntimeService/Version
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.971815595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91023e6f-8960-4cb2-a91c-a83e337119db name=/runtime.v1.RuntimeService/Version
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.973046332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22f17e08-b86b-47a4-847e-d9737f04833b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.973420998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462377973399387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22f17e08-b86b-47a4-847e-d9737f04833b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.974054125Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5b4e426-e081-4f16-a0ef-ad2ff97a16d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.974128499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5b4e426-e081-4f16-a0ef-ad2ff97a16d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:37 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:37.974491055Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63c992ccc7bba545584634e00ed0d0698b75c7af484db9be2992faaceb62a2f1,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727462374087013612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2e2289b3e8353ee8e8b0409bd3520aa85024e9fc6e928d1fa6bdff5dcc33c7,PodSandboxId:807c3ffb4a233c760da31b8f70bb46db48a1ffb1093aa501e3c00df80f47c226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462374066223307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83815877faed962ab68b30e231705593eb081f15fd528a5f996216830ca48a47,PodSandboxId:a2b23a65cb1b97f649d7eefd2f6b78e7a158602f18fe90619c4a563a979d27a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374096673688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2825128f60ed5a4a8a6b2e596a995b953f46aabfd7c61b244b774dfdca2ad37b,PodSandboxId:273cb3712f326d81481e696de91fec0faada3dbfec55ee7eee6bfbdc74debd01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374076768402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c
4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b328d54d7eb3722d57a33d292bcbcc31ee1af6d1b487a44fa540b1931d2b8,PodSandboxId:c4c2844f6ad8993188de8492efd7e818336693436a07b0d5ff6e9c41332ad4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462370197237194,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62d411f55ab38d9bf816369e92861ccd13673ff16c0a72c8077fc1c4b120453,PodSandboxId:2ba7c2dcec748f1980469668d3b015c08cf29b85be74d2b3c2ddc2d3ead065d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462370222440812,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d6edc83afddc5c6b8ce6e44f036eb49fdaa4e8ce06b739b2a18f20ee1a6ffc,PodSandboxId:92977a7bc7e888ad02ff6f959592ab496a48896e20d93b38acc8c24bd02e449f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462370208860052,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3feb6a79a3350f141a40e0475c6a639c2fbd467b56104e696bd374449e32547,PodSandboxId:34525136e319d3d7fda2ed1b430e5fa802147de75331e500a9ef5f283351c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462370181406697,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727462367176296629,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548,PodSandboxId:622817cb361e116ae72356f438ef4de9c6266dc3385e71aa8736abed69662068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462363116138727,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1,PodSandboxId:c6b1a2bad1f5995428a2bf87c9d8661db282fb753329c59ac8dac2e6bb4f7f62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462364053542371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a,PodSandboxId:7f1d83a766fc1b3bf9ddd8e91690403e2a181d7b5588c5afbfa80ef74d826521,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462363989484450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2,PodSandboxId:fe26e10e875de18d62fbd6252bd0c9c3ea3c6152f39030bb3080f1a427ed4422,Metadata:&ContainerMetadata{
Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462363268390092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1,PodSandboxId:c406aba7c6c9caffa1f11d42e5ca122d51bd2e4673c1f736eb1db05f41f3228f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462363285651561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352,PodSandboxId:3151b9fd4c7d52438545fb0f12582c7df6f11e85bafd778f5763725b1fa32c99,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462362923927702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5,PodSandboxId:42ee4cff018d84e9ded2ef2db042d87d4d2dbab84d591e4442d197df5f76785e,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462362711763921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5b4e426-e081-4f16-a0ef-ad2ff97a16d5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.009665858Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b945cda7-2abf-44af-b63f-6e8df647ca1a name=/runtime.v1.RuntimeService/Version
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.009810295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b945cda7-2abf-44af-b63f-6e8df647ca1a name=/runtime.v1.RuntimeService/Version
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.011194819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d4f2729-75df-4c9f-a277-05004cbc2899 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.011659261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462378011620227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d4f2729-75df-4c9f-a277-05004cbc2899 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.012395449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31367843-df5a-4ed8-bab6-1d87c56ee6e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.012491078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31367843-df5a-4ed8-bab6-1d87c56ee6e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:39:38 kubernetes-upgrade-477684 crio[3073]: time="2024-09-27 18:39:38.012994961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:63c992ccc7bba545584634e00ed0d0698b75c7af484db9be2992faaceb62a2f1,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1727462374087013612,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e2e2289b3e8353ee8e8b0409bd3520aa85024e9fc6e928d1fa6bdff5dcc33c7,PodSandboxId:807c3ffb4a233c760da31b8f70bb46db48a1ffb1093aa501e3c00df80f47c226,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462374066223307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83815877faed962ab68b30e231705593eb081f15fd528a5f996216830ca48a47,PodSandboxId:a2b23a65cb1b97f649d7eefd2f6b78e7a158602f18fe90619c4a563a979d27a8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374096673688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2825128f60ed5a4a8a6b2e596a995b953f46aabfd7c61b244b774dfdca2ad37b,PodSandboxId:273cb3712f326d81481e696de91fec0faada3dbfec55ee7eee6bfbdc74debd01,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462374076768402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c
4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f8b328d54d7eb3722d57a33d292bcbcc31ee1af6d1b487a44fa540b1931d2b8,PodSandboxId:c4c2844f6ad8993188de8492efd7e818336693436a07b0d5ff6e9c41332ad4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462370197237194,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62d411f55ab38d9bf816369e92861ccd13673ff16c0a72c8077fc1c4b120453,PodSandboxId:2ba7c2dcec748f1980469668d3b015c08cf29b85be74d2b3c2ddc2d3ead065d7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462370222440812,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d6edc83afddc5c6b8ce6e44f036eb49fdaa4e8ce06b739b2a18f20ee1a6ffc,PodSandboxId:92977a7bc7e888ad02ff6f959592ab496a48896e20d93b38acc8c24bd02e449f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462370208860052,Labels:map[st
ring]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3feb6a79a3350f141a40e0475c6a639c2fbd467b56104e696bd374449e32547,PodSandboxId:34525136e319d3d7fda2ed1b430e5fa802147de75331e500a9ef5f283351c26e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462370181406697,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47,PodSandboxId:da7c55db999a8ffec6137090ae24cdfcae9ab655d206938443a0e14239c57840,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1727462367176296629,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b735f47-d670-4415-b040-37fa30dfc415,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548,PodSandboxId:622817cb361e116ae72356f438ef4de9c6266dc3385e71aa8736abed69662068,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462363116138727,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76w2d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8deb7f8-82ab-4064-9f9b-19d7e2eb1884,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1,PodSandboxId:c6b1a2bad1f5995428a2bf87c9d8661db282fb753329c59ac8dac2e6bb4f7f62,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462364053542371,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-7c65d6cfc9-dkpwl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83b9300d-d68b-4c85-941b-0fdbd39d4ccf,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a,PodSandboxId:7f1d83a766fc1b3bf9ddd8e91690403e2a181d7b5588c5afbfa80ef74d826521,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6
9fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462363989484450,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tljn5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 096154c3-c5ec-441c-b1bd-5c4fb95f211c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2,PodSandboxId:fe26e10e875de18d62fbd6252bd0c9c3ea3c6152f39030bb3080f1a427ed4422,Metadata:&ContainerMetadata{
Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462363268390092,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65f7289bc4d268407b252bcf7901d567,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1,PodSandboxId:c406aba7c6c9caffa1f11d42e5ca122d51bd2e4673c1f736eb1db05f41f3228f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462363285651561,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1c7ac2259f9a261bb6d913d95acba2a,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352,PodSandboxId:3151b9fd4c7d52438545fb0f12582c7df6f11e85bafd778f5763725b1fa32c99,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462362923927702,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68df21b27abba208f190d6ffbb0fc52e,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5,PodSandboxId:42ee4cff018d84e9ded2ef2db042d87d4d2dbab84d591e4442d197df5f76785e,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462362711763921,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-477684,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19384059162ea6d3bb38cb3aac20162d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31367843-df5a-4ed8-bab6-1d87c56ee6e0 name=/runtime.v1.RuntimeService/ListContainers
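The ListContainers, Version, and ImageFsInfo entries above are ordinary CRI gRPC calls answered by CRI-O. As a minimal sketch (the socket path and timeout are assumptions, not taken from the log), a Go client issuing the same three RPCs could look like this:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, usage := range fs.ImageFilesystems {
		fmt.Println(usage.FsId.GetMountpoint(), usage.UsedBytes.GetValue())
	}

	// /runtime.v1.RuntimeService/ListContainers with no filter returns the
	// full container list, matching the responses logged above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State, c.Metadata.Attempt)
	}
}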
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	83815877faed9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   a2b23a65cb1b9       coredns-7c65d6cfc9-dkpwl
	63c992ccc7bba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   da7c55db999a8       storage-provisioner
	2825128f60ed5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   273cb3712f326       coredns-7c65d6cfc9-tljn5
	8e2e2289b3e83       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   4 seconds ago       Running             kube-proxy                2                   807c3ffb4a233       kube-proxy-76w2d
	b62d411f55ab3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   2ba7c2dcec748       kube-controller-manager-kubernetes-upgrade-477684
	69d6edc83afdd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   92977a7bc7e88       kube-scheduler-kubernetes-upgrade-477684
	5f8b328d54d7e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   c4c2844f6ad89       etcd-kubernetes-upgrade-477684
	b3feb6a79a335       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   34525136e319d       kube-apiserver-kubernetes-upgrade-477684
	3050d7fde1ed2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 seconds ago      Exited              storage-provisioner       2                   da7c55db999a8       storage-provisioner
	fbe9ce203428b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago      Exited              coredns                   1                   c6b1a2bad1f59       coredns-7c65d6cfc9-dkpwl
	8232d398f1322       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 seconds ago      Exited              coredns                   1                   7f1d83a766fc1       coredns-7c65d6cfc9-tljn5
	cc8aeba78a76f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 seconds ago      Exited              kube-scheduler            1                   c406aba7c6c9c       kube-scheduler-kubernetes-upgrade-477684
	39712acad361f       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 seconds ago      Exited              kube-apiserver            1                   fe26e10e875de       kube-apiserver-kubernetes-upgrade-477684
	d13dd0f5f406b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 seconds ago      Exited              kube-proxy                1                   622817cb361e1       kube-proxy-76w2d
	f853670585f66       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   15 seconds ago      Exited              kube-controller-manager   1                   3151b9fd4c7d5       kube-controller-manager-kubernetes-upgrade-477684
	46566a861e81b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 seconds ago      Exited              etcd                      1                   42ee4cff018d8       etcd-kubernetes-upgrade-477684
	
	
	==> coredns [2825128f60ed5a4a8a6b2e596a995b953f46aabfd7c61b244b774dfdca2ad37b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a] <==
	
	
	==> coredns [83815877faed962ab68b30e231705593eb081f15fd528a5f996216830ca48a47] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-477684
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-477684
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:38:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-477684
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:39:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:39:33 +0000   Fri, 27 Sep 2024 18:38:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:39:33 +0000   Fri, 27 Sep 2024 18:38:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:39:33 +0000   Fri, 27 Sep 2024 18:38:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:39:33 +0000   Fri, 27 Sep 2024 18:38:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    kubernetes-upgrade-477684
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c7fa176ccfb42fd93bc0cda3a905d18
	  System UUID:                4c7fa176-ccfb-42fd-93bc-0cda3a905d18
	  Boot ID:                    ca053d67-0e13-42b9-838f-e1e68cb3235f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dkpwl                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 coredns-7c65d6cfc9-tljn5                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-kubernetes-upgrade-477684                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         62s
	  kube-system                 kube-apiserver-kubernetes-upgrade-477684             250m (12%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-477684    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-76w2d                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-kubernetes-upgrade-477684             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    70s (x8 over 72s)  kubelet          Node kubernetes-upgrade-477684 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x7 over 72s)  kubelet          Node kubernetes-upgrade-477684 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  70s (x8 over 72s)  kubelet          Node kubernetes-upgrade-477684 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           59s                node-controller  Node kubernetes-upgrade-477684 event: Registered Node kubernetes-upgrade-477684 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-477684 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-477684 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-477684 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-477684 event: Registered Node kubernetes-upgrade-477684 in Controller
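The conditions, capacity, and events above are read from the node object itself. A minimal client-go sketch that fetches the same node and prints its conditions (the kubeconfig location via KUBECONFIG is an assumption; the node name is the one shown above):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a REST config from whatever kubeconfig the cluster under test exposes.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"kubernetes-upgrade-477684", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints MemoryPressure / DiskPressure / PIDPressure / Ready as in the table above.
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	}
}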
	
	
	==> dmesg <==
	[  +1.571425] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.112925] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.059373] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.076975] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.221363] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.150761] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.305879] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +4.242839] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +0.059883] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835054] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[ +10.477226] systemd-fstab-generator[1208]: Ignoring "noauto" option for root device
	[  +0.098714] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.886680] kauditd_printk_skb: 107 callbacks suppressed
	[Sep27 18:39] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.342398] systemd-fstab-generator[2376]: Ignoring "noauto" option for root device
	[  +0.723017] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.284868] systemd-fstab-generator[2747]: Ignoring "noauto" option for root device
	[  +0.880567] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +2.582052] kauditd_printk_skb: 233 callbacks suppressed
	[  +0.134426] systemd-fstab-generator[3649]: Ignoring "noauto" option for root device
	[  +2.374432] systemd-fstab-generator[4078]: Ignoring "noauto" option for root device
	[  +4.752641] kauditd_printk_skb: 111 callbacks suppressed
	[  +1.574728] systemd-fstab-generator[4582]: Ignoring "noauto" option for root device
	
	
	==> etcd [46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5] <==
	{"level":"info","ts":"2024-09-27T18:39:24.172186Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-27T18:39:24.271151Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","commit-index":419}
	{"level":"info","ts":"2024-09-27T18:39:24.275818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-27T18:39:24.276143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became follower at term 2"}
	{"level":"info","ts":"2024-09-27T18:39:24.276178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft e5487579cc149d4d [peers: [], term: 2, commit: 419, applied: 0, lastindex: 419, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-27T18:39:24.281953Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-27T18:39:24.311917Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":405}
	{"level":"info","ts":"2024-09-27T18:39:24.384900Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-27T18:39:24.410294Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"e5487579cc149d4d","timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:39:24.426259Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"e5487579cc149d4d"}
	{"level":"info","ts":"2024-09-27T18:39:24.430405Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"e5487579cc149d4d","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-27T18:39:24.431137Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:39:24.439994Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T18:39:24.440212Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:39:24.440253Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:39:24.440266Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:39:24.440624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d switched to configuration voters=(16521584398984060237)"}
	{"level":"info","ts":"2024-09-27T18:39:24.451947Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T18:39:24.472762Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","added-peer-id":"e5487579cc149d4d","added-peer-peer-urls":["https://192.168.50.36:2380"]}
	{"level":"info","ts":"2024-09-27T18:39:24.508863Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:39:24.508918Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:39:24.473057Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2024-09-27T18:39:24.510673Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2024-09-27T18:39:24.526439Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e5487579cc149d4d","initial-advertise-peer-urls":["https://192.168.50.36:2380"],"listen-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:39:24.526511Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [5f8b328d54d7eb3722d57a33d292bcbcc31ee1af6d1b487a44fa540b1931d2b8] <==
	{"level":"info","ts":"2024-09-27T18:39:30.785798Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:39:30.782188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d switched to configuration voters=(16521584398984060237)"}
	{"level":"info","ts":"2024-09-27T18:39:30.786124Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","added-peer-id":"e5487579cc149d4d","added-peer-peer-urls":["https://192.168.50.36:2380"]}
	{"level":"info","ts":"2024-09-27T18:39:30.786322Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"31bd1a1c1ff06930","local-member-id":"e5487579cc149d4d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:39:30.786421Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:39:30.784240Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"e5487579cc149d4d","initial-advertise-peer-urls":["https://192.168.50.36:2380"],"listen-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.36:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:39:30.784322Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2024-09-27T18:39:30.795861Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2024-09-27T18:39:30.784258Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:39:31.808442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T18:39:31.808600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T18:39:31.808656Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d received MsgPreVoteResp from e5487579cc149d4d at term 2"}
	{"level":"info","ts":"2024-09-27T18:39:31.808709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T18:39:31.808824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d received MsgVoteResp from e5487579cc149d4d at term 3"}
	{"level":"info","ts":"2024-09-27T18:39:31.808864Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e5487579cc149d4d became leader at term 3"}
	{"level":"info","ts":"2024-09-27T18:39:31.808898Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e5487579cc149d4d elected leader e5487579cc149d4d at term 3"}
	{"level":"info","ts":"2024-09-27T18:39:31.837226Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"e5487579cc149d4d","local-member-attributes":"{Name:kubernetes-upgrade-477684 ClientURLs:[https://192.168.50.36:2379]}","request-path":"/0/members/e5487579cc149d4d/attributes","cluster-id":"31bd1a1c1ff06930","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:39:31.837703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:39:31.837915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:39:31.838798Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T18:39:31.838972Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T18:39:31.839122Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:39:31.840073Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.36:2379"}
	{"level":"info","ts":"2024-09-27T18:39:31.840903Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:39:31.843155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:39:38 up 1 min,  0 users,  load average: 2.72, 0.69, 0.23
	Linux kubernetes-upgrade-477684 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2] <==
	I0927 18:39:24.116151       1 options.go:228] external host was not specified, using 192.168.50.36
	I0927 18:39:24.201899       1 server.go:142] Version: v1.31.1
	I0927 18:39:24.201970       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [b3feb6a79a3350f141a40e0475c6a639c2fbd467b56104e696bd374449e32547] <==
	I0927 18:39:33.591506       1 policy_source.go:224] refreshing policies
	I0927 18:39:33.597836       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 18:39:33.597924       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 18:39:33.598400       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 18:39:33.599415       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0927 18:39:33.603508       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 18:39:33.607043       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 18:39:33.617461       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 18:39:33.617828       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 18:39:33.622680       1 aggregator.go:171] initial CRD sync complete...
	I0927 18:39:33.622837       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 18:39:33.622905       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 18:39:33.622960       1 cache.go:39] Caches are synced for autoregister controller
	I0927 18:39:33.634695       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 18:39:33.638686       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0927 18:39:33.655837       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 18:39:33.661484       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0927 18:39:34.429097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 18:39:35.494463       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:39:35.512654       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:39:35.572066       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:39:35.671817       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:39:35.686087       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:39:36.698525       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 18:39:37.196683       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b62d411f55ab38d9bf816369e92861ccd13673ff16c0a72c8077fc1c4b120453] <==
	I0927 18:39:36.887665       1 shared_informer.go:320] Caches are synced for GC
	I0927 18:39:36.887758       1 shared_informer.go:320] Caches are synced for stateful set
	I0927 18:39:36.887909       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0927 18:39:36.892277       1 shared_informer.go:320] Caches are synced for job
	I0927 18:39:36.893458       1 shared_informer.go:320] Caches are synced for node
	I0927 18:39:36.894200       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0927 18:39:36.894391       1 shared_informer.go:320] Caches are synced for taint
	I0927 18:39:36.894481       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0927 18:39:36.894510       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0927 18:39:36.894518       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0927 18:39:36.894612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-477684"
	I0927 18:39:36.894809       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0927 18:39:36.895119       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-477684"
	I0927 18:39:36.895244       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0927 18:39:36.902777       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0927 18:39:36.904098       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0927 18:39:36.906290       1 shared_informer.go:320] Caches are synced for ephemeral
	I0927 18:39:36.956136       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 18:39:37.003600       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 18:39:37.037033       1 shared_informer.go:320] Caches are synced for persistent volume
	I0927 18:39:37.058324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="216.675274ms"
	I0927 18:39:37.058748       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.412µs"
	I0927 18:39:37.508401       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 18:39:37.513710       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 18:39:37.515788       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352] <==
	
	
	==> kube-proxy [8e2e2289b3e8353ee8e8b0409bd3520aa85024e9fc6e928d1fa6bdff5dcc33c7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:39:34.641695       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:39:34.665370       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.36"]
	E0927 18:39:34.665704       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:39:34.718659       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:39:34.718701       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:39:34.718786       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:39:34.722050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:39:34.722477       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:39:34.722702       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:39:34.725313       1 config.go:199] "Starting service config controller"
	I0927 18:39:34.725405       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:39:34.725477       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:39:34.725509       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:39:34.726120       1 config.go:328] "Starting node config controller"
	I0927 18:39:34.726172       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:39:34.827224       1 shared_informer.go:320] Caches are synced for node config
	I0927 18:39:34.827341       1 shared_informer.go:320] Caches are synced for service config
	I0927 18:39:34.827376       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548] <==
	
	
	==> kube-scheduler [69d6edc83afddc5c6b8ce6e44f036eb49fdaa4e8ce06b739b2a18f20ee1a6ffc] <==
	I0927 18:39:31.288116       1 serving.go:386] Generated self-signed cert in-memory
	W0927 18:39:33.522302       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:39:33.522492       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:39:33.522534       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:39:33.522575       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:39:33.576032       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 18:39:33.577819       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:39:33.582566       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 18:39:33.583767       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 18:39:33.585824       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:39:33.586071       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 18:39:33.687032       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1] <==
	
	
	==> kubelet <==
	Sep 27 18:39:29 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:29.956109    4085 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68df21b27abba208f190d6ffbb0fc52e-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-477684\" (UID: \"68df21b27abba208f190d6ffbb0fc52e\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-477684"
	Sep 27 18:39:29 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:29.956125    4085 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1c7ac2259f9a261bb6d913d95acba2a-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-477684\" (UID: \"a1c7ac2259f9a261bb6d913d95acba2a\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-477684"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:30.149151    4085 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-477684"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: E0927 18:39:30.150189    4085 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.36:8443: connect: connection refused" node="kubernetes-upgrade-477684"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:30.157544    4085 scope.go:117] "RemoveContainer" containerID="39712acad361f44c0bb0791f4e78fdb1d7a69deae048955738766449c4dba8a2"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:30.160141    4085 scope.go:117] "RemoveContainer" containerID="f853670585f6636c0c8c083e0e04d10a2ca76027fd041d5d36b4f87a54b53352"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:30.160193    4085 scope.go:117] "RemoveContainer" containerID="46566a861e81ba736c61bbf2b068a1f4e4e03616f1fc06371894cdb487b919d5"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:30.172313    4085 scope.go:117] "RemoveContainer" containerID="cc8aeba78a76f45f0e94fc86dbe6f79d0da102ed98c4130ab6d18ace798015a1"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: E0927 18:39:30.347546    4085 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-477684?timeout=10s\": dial tcp 192.168.50.36:8443: connect: connection refused" interval="800ms"
	Sep 27 18:39:30 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:30.551864    4085 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-477684"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.681246    4085 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-477684"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.681510    4085 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-477684"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.681569    4085 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.683629    4085 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.731288    4085 apiserver.go:52] "Watching apiserver"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.751839    4085 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.784300    4085 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4b735f47-d670-4415-b040-37fa30dfc415-tmp\") pod \"storage-provisioner\" (UID: \"4b735f47-d670-4415-b040-37fa30dfc415\") " pod="kube-system/storage-provisioner"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.784416    4085 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8deb7f8-82ab-4064-9f9b-19d7e2eb1884-xtables-lock\") pod \"kube-proxy-76w2d\" (UID: \"e8deb7f8-82ab-4064-9f9b-19d7e2eb1884\") " pod="kube-system/kube-proxy-76w2d"
	Sep 27 18:39:33 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:33.784495    4085 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8deb7f8-82ab-4064-9f9b-19d7e2eb1884-lib-modules\") pod \"kube-proxy-76w2d\" (UID: \"e8deb7f8-82ab-4064-9f9b-19d7e2eb1884\") " pod="kube-system/kube-proxy-76w2d"
	Sep 27 18:39:34 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:34.039657    4085 scope.go:117] "RemoveContainer" containerID="d13dd0f5f406b420a0f5b9e3e5d09494a783187889357207e54f032ed78f6548"
	Sep 27 18:39:34 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:34.040014    4085 scope.go:117] "RemoveContainer" containerID="3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47"
	Sep 27 18:39:34 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:34.041105    4085 scope.go:117] "RemoveContainer" containerID="8232d398f13225002d11a02d27affb9e0bedd57769d8c3372c5285793faf888a"
	Sep 27 18:39:34 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:34.041393    4085 scope.go:117] "RemoveContainer" containerID="fbe9ce203428b56a78fc951c34bd118c6a29592b17ee3364f73d4e9e109d62a1"
	Sep 27 18:39:36 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:36.662591    4085 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 27 18:39:36 kubernetes-upgrade-477684 kubelet[4085]: I0927 18:39:36.740159    4085 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [3050d7fde1ed2dd0d589605e5eef8a9e3e92551c404c40ed5b3a3a494e803c47] <==
	I0927 18:39:27.448175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0927 18:39:27.449685       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [63c992ccc7bba545584634e00ed0d0698b75c7af484db9be2992faaceb62a2f1] <==
	I0927 18:39:34.526321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 18:39:34.555873       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 18:39:34.555941       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0927 18:39:37.364849   68246 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19712-11184/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-477684 -n kubernetes-upgrade-477684
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-477684 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-477684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-477684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-477684: (1.149283879s)
--- FAIL: TestKubernetesUpgrade (385.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (40.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-670363 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-670363 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.156024145s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-670363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-670363" primary control-plane node in "pause-670363" cluster
	* Updating the running kvm2 "pause-670363" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-670363" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 18:37:41.043527   65407 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:37:41.043783   65407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:41.043793   65407 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:41.043798   65407 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:41.043982   65407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:37:41.044524   65407 out.go:352] Setting JSON to false
	I0927 18:37:41.045577   65407 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8406,"bootTime":1727453855,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:37:41.045679   65407 start.go:139] virtualization: kvm guest
	I0927 18:37:41.048004   65407 out.go:177] * [pause-670363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:37:41.049626   65407 notify.go:220] Checking for updates...
	I0927 18:37:41.049637   65407 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:37:41.051405   65407 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:37:41.053129   65407 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:37:41.054924   65407 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:37:41.056599   65407 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:37:41.058081   65407 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:37:41.060130   65407 config.go:182] Loaded profile config "pause-670363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:37:41.060637   65407 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:41.060700   65407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:41.076613   65407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0927 18:37:41.077104   65407 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:41.077766   65407 main.go:141] libmachine: Using API Version  1
	I0927 18:37:41.077784   65407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:41.078093   65407 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:41.078261   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:41.078506   65407 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:37:41.078848   65407 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:41.078883   65407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:41.094350   65407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43249
	I0927 18:37:41.094801   65407 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:41.095279   65407 main.go:141] libmachine: Using API Version  1
	I0927 18:37:41.095301   65407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:41.095714   65407 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:41.095921   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:41.133643   65407 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:37:41.135465   65407 start.go:297] selected driver: kvm2
	I0927 18:37:41.135487   65407 start.go:901] validating driver "kvm2" against &{Name:pause-670363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:41.135638   65407 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:37:41.135958   65407 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:41.136055   65407 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:37:41.152330   65407 install.go:137] /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:37:41.153357   65407 cni.go:84] Creating CNI manager for ""
	I0927 18:37:41.153425   65407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:37:41.153515   65407 start.go:340] cluster config:
	{Name:pause-670363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-670363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:41.153716   65407 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:41.420175   65407 out.go:177] * Starting "pause-670363" primary control-plane node in "pause-670363" cluster
	I0927 18:37:41.556810   65407 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:37:41.556895   65407 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:37:41.556904   65407 cache.go:56] Caching tarball of preloaded images
	I0927 18:37:41.557016   65407 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:37:41.557033   65407 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:37:41.557214   65407 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/config.json ...
	I0927 18:37:41.650092   65407 start.go:360] acquireMachinesLock for pause-670363: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:37:41.650233   65407 start.go:364] duration metric: took 104.14µs to acquireMachinesLock for "pause-670363"
	I0927 18:37:41.650261   65407 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:37:41.650272   65407 fix.go:54] fixHost starting: 
	I0927 18:37:41.650692   65407 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:41.650739   65407 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:41.669261   65407 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44927
	I0927 18:37:41.669787   65407 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:41.670335   65407 main.go:141] libmachine: Using API Version  1
	I0927 18:37:41.670360   65407 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:41.670668   65407 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:41.670875   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:41.671054   65407 main.go:141] libmachine: (pause-670363) Calling .GetState
	I0927 18:37:41.672800   65407 fix.go:112] recreateIfNeeded on pause-670363: state=Running err=<nil>
	W0927 18:37:41.672837   65407 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:37:41.822005   65407 out.go:177] * Updating the running kvm2 "pause-670363" VM ...
	I0927 18:37:41.842881   65407 machine.go:93] provisionDockerMachine start ...
	I0927 18:37:41.842919   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:41.843227   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:41.846079   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:41.846535   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:41.846572   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:41.846776   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:41.847009   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:41.847188   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:41.847385   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:41.847640   65407 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:41.847876   65407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0927 18:37:41.847890   65407 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:37:41.967675   65407 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-670363
	
	I0927 18:37:41.967700   65407 main.go:141] libmachine: (pause-670363) Calling .GetMachineName
	I0927 18:37:41.967954   65407 buildroot.go:166] provisioning hostname "pause-670363"
	I0927 18:37:41.967986   65407 main.go:141] libmachine: (pause-670363) Calling .GetMachineName
	I0927 18:37:41.968224   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:41.971242   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:41.971652   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:41.971691   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:41.971809   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:41.972006   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:41.972154   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:41.972269   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:41.972468   65407 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:41.972702   65407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0927 18:37:41.972716   65407 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-670363 && echo "pause-670363" | sudo tee /etc/hostname
	I0927 18:37:42.108605   65407 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-670363
	
	I0927 18:37:42.108634   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:42.111989   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.112379   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:42.112419   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.112607   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:42.112830   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:42.112997   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:42.113173   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:42.113473   65407 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:42.113714   65407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0927 18:37:42.113740   65407 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-670363' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-670363/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-670363' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:37:42.235245   65407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0927 18:37:42.235277   65407 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:37:42.235295   65407 buildroot.go:174] setting up certificates
	I0927 18:37:42.235303   65407 provision.go:84] configureAuth start
	I0927 18:37:42.235310   65407 main.go:141] libmachine: (pause-670363) Calling .GetMachineName
	I0927 18:37:42.235624   65407 main.go:141] libmachine: (pause-670363) Calling .GetIP
	I0927 18:37:42.238416   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.238820   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:42.238846   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.238999   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:42.241345   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.241659   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:42.241686   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.241840   65407 provision.go:143] copyHostCerts
	I0927 18:37:42.241885   65407 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:37:42.241893   65407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:37:42.267462   65407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:37:42.267633   65407 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:37:42.267646   65407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:37:42.267682   65407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:37:42.267751   65407 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:37:42.267760   65407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:37:42.267783   65407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:37:42.267842   65407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.pause-670363 san=[127.0.0.1 192.168.61.48 localhost minikube pause-670363]
	I0927 18:37:42.634321   65407 provision.go:177] copyRemoteCerts
	I0927 18:37:42.634389   65407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:37:42.634416   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:42.638162   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.638616   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:42.638670   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.638899   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:42.639142   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:42.639312   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:42.639538   65407 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/pause-670363/id_rsa Username:docker}
	I0927 18:37:42.733500   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:37:42.766772   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0927 18:37:42.795589   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 18:37:42.824372   65407 provision.go:87] duration metric: took 589.053521ms to configureAuth
	I0927 18:37:42.824407   65407 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:37:42.824786   65407 config.go:182] Loaded profile config "pause-670363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:37:42.824891   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:42.828070   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.828471   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:42.828500   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:42.828698   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:42.828909   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:42.829079   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:42.829246   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:42.829423   65407 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:42.829605   65407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0927 18:37:42.829623   65407 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0927 18:37:48.337770   65407 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0927 18:37:48.337801   65407 machine.go:96] duration metric: took 6.494895739s to provisionDockerMachine
	I0927 18:37:48.337822   65407 start.go:293] postStartSetup for "pause-670363" (driver="kvm2")
	I0927 18:37:48.337834   65407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 18:37:48.337851   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:48.338222   65407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 18:37:48.338247   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:48.341430   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.341902   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:48.341934   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.342111   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:48.342350   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:48.342476   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:48.342602   65407 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/pause-670363/id_rsa Username:docker}
	I0927 18:37:48.429204   65407 ssh_runner.go:195] Run: cat /etc/os-release
	I0927 18:37:48.433352   65407 info.go:137] Remote host: Buildroot 2023.02.9
	I0927 18:37:48.433379   65407 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/addons for local assets ...
	I0927 18:37:48.433434   65407 filesync.go:126] Scanning /home/jenkins/minikube-integration/19712-11184/.minikube/files for local assets ...
	I0927 18:37:48.433514   65407 filesync.go:149] local asset: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem -> 183682.pem in /etc/ssl/certs
	I0927 18:37:48.433610   65407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0927 18:37:48.442694   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:37:48.466634   65407 start.go:296] duration metric: took 128.795394ms for postStartSetup
	I0927 18:37:48.466711   65407 fix.go:56] duration metric: took 6.81643826s for fixHost
	I0927 18:37:48.466738   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:48.469496   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.469877   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:48.469910   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.470081   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:48.470270   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:48.470497   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:48.470697   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:48.470869   65407 main.go:141] libmachine: Using SSH client type: native
	I0927 18:37:48.471052   65407 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.48 22 <nil> <nil>}
	I0927 18:37:48.471067   65407 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0927 18:37:48.583286   65407 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727462268.575032477
	
	I0927 18:37:48.583324   65407 fix.go:216] guest clock: 1727462268.575032477
	I0927 18:37:48.583352   65407 fix.go:229] Guest: 2024-09-27 18:37:48.575032477 +0000 UTC Remote: 2024-09-27 18:37:48.466717263 +0000 UTC m=+7.461352087 (delta=108.315214ms)
	I0927 18:37:48.583386   65407 fix.go:200] guest clock delta is within tolerance: 108.315214ms
	I0927 18:37:48.583397   65407 start.go:83] releasing machines lock for "pause-670363", held for 6.933147765s
	I0927 18:37:48.583432   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:48.583726   65407 main.go:141] libmachine: (pause-670363) Calling .GetIP
	I0927 18:37:48.586485   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.586866   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:48.586893   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.587060   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:48.587519   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:48.587680   65407 main.go:141] libmachine: (pause-670363) Calling .DriverName
	I0927 18:37:48.587781   65407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0927 18:37:48.587827   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:48.587923   65407 ssh_runner.go:195] Run: cat /version.json
	I0927 18:37:48.587943   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHHostname
	I0927 18:37:48.590593   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.590900   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.591051   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:48.591079   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.591288   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:48.591354   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:48.591380   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:48.591497   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:48.591604   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHPort
	I0927 18:37:48.591679   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:48.591736   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHKeyPath
	I0927 18:37:48.591792   65407 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/pause-670363/id_rsa Username:docker}
	I0927 18:37:48.591824   65407 main.go:141] libmachine: (pause-670363) Calling .GetSSHUsername
	I0927 18:37:48.591910   65407 sshutil.go:53] new ssh client: &{IP:192.168.61.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/pause-670363/id_rsa Username:docker}
	I0927 18:37:48.705589   65407 ssh_runner.go:195] Run: systemctl --version
	I0927 18:37:48.713075   65407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0927 18:37:48.873896   65407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0927 18:37:48.879620   65407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 18:37:48.879697   65407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 18:37:48.889125   65407 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 18:37:48.889153   65407 start.go:495] detecting cgroup driver to use...
	I0927 18:37:48.889275   65407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0927 18:37:48.905341   65407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0927 18:37:48.919805   65407 docker.go:217] disabling cri-docker service (if available) ...
	I0927 18:37:48.919905   65407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0927 18:37:48.936084   65407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0927 18:37:48.950123   65407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0927 18:37:49.139750   65407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0927 18:37:49.436581   65407 docker.go:233] disabling docker service ...
	I0927 18:37:49.436658   65407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0927 18:37:49.503017   65407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0927 18:37:49.525766   65407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0927 18:37:49.792308   65407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0927 18:37:50.087860   65407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0927 18:37:50.135832   65407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 18:37:50.182913   65407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0927 18:37:50.182984   65407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.196549   65407 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0927 18:37:50.196646   65407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.209932   65407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.225486   65407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.241219   65407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 18:37:50.257953   65407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.271926   65407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.287957   65407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0927 18:37:50.305457   65407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 18:37:50.360782   65407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 18:37:50.395657   65407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:37:50.591690   65407 ssh_runner.go:195] Run: sudo systemctl restart crio
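Taken together, the commands above point /etc/crictl.yaml at unix:///var/run/crio/crio.sock and leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following settings before crio is restarted (a sketch reconstructed only from the sed edits shown in this log; the actual file on the VM may contain additional keys):

  pause_image = "registry.k8s.io/pause:3.10"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]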
	I0927 18:37:51.197743   65407 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0927 18:37:51.197830   65407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0927 18:37:51.203666   65407 start.go:563] Will wait 60s for crictl version
	I0927 18:37:51.203732   65407 ssh_runner.go:195] Run: which crictl
	I0927 18:37:51.207520   65407 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0927 18:37:51.247504   65407 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0927 18:37:51.247615   65407 ssh_runner.go:195] Run: crio --version
	I0927 18:37:51.277215   65407 ssh_runner.go:195] Run: crio --version
	I0927 18:37:51.311405   65407 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0927 18:37:51.312936   65407 main.go:141] libmachine: (pause-670363) Calling .GetIP
	I0927 18:37:51.315612   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:51.315895   65407 main.go:141] libmachine: (pause-670363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:27:4f:a0", ip: ""} in network mk-pause-670363: {Iface:virbr3 ExpiryTime:2024-09-27 19:36:34 +0000 UTC Type:0 Mac:52:54:00:27:4f:a0 Iaid: IPaddr:192.168.61.48 Prefix:24 Hostname:pause-670363 Clientid:01:52:54:00:27:4f:a0}
	I0927 18:37:51.315918   65407 main.go:141] libmachine: (pause-670363) DBG | domain pause-670363 has defined IP address 192.168.61.48 and MAC address 52:54:00:27:4f:a0 in network mk-pause-670363
	I0927 18:37:51.316165   65407 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0927 18:37:51.320675   65407 kubeadm.go:883] updating cluster {Name:pause-670363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-670363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-sec
urity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 18:37:51.320800   65407 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:37:51.320879   65407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:37:51.362416   65407 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:37:51.362439   65407 crio.go:433] Images already preloaded, skipping extraction
	I0927 18:37:51.362491   65407 ssh_runner.go:195] Run: sudo crictl images --output json
	I0927 18:37:51.396489   65407 crio.go:514] all images are preloaded for cri-o runtime.
	I0927 18:37:51.396511   65407 cache_images.go:84] Images are preloaded, skipping loading
	I0927 18:37:51.396518   65407 kubeadm.go:934] updating node { 192.168.61.48 8443 v1.31.1 crio true true} ...
	I0927 18:37:51.396622   65407 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-670363 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.48
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-670363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0927 18:37:51.396692   65407 ssh_runner.go:195] Run: crio config
	I0927 18:37:51.442736   65407 cni.go:84] Creating CNI manager for ""
	I0927 18:37:51.442758   65407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:37:51.442768   65407 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 18:37:51.442787   65407 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-670363 NodeName:pause-670363 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 18:37:51.442919   65407 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-670363"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 18:37:51.442977   65407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 18:37:51.453220   65407 binaries.go:44] Found k8s binaries, skipping transfer
	I0927 18:37:51.453289   65407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 18:37:51.462725   65407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0927 18:37:51.479690   65407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 18:37:51.496748   65407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 18:37:51.513998   65407 ssh_runner.go:195] Run: grep 192.168.61.48	control-plane.minikube.internal$ /etc/hosts
	I0927 18:37:51.517980   65407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:37:51.651745   65407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:37:51.667052   65407 certs.go:68] Setting up /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363 for IP: 192.168.61.48
	I0927 18:37:51.667075   65407 certs.go:194] generating shared ca certs ...
	I0927 18:37:51.667091   65407 certs.go:226] acquiring lock for ca certs: {Name:mkaf4622b37eb514d87bc35054cf668cb0cbcaac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:37:51.667240   65407 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key
	I0927 18:37:51.667274   65407 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key
	I0927 18:37:51.667283   65407 certs.go:256] generating profile certs ...
	I0927 18:37:51.667365   65407 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/client.key
	I0927 18:37:51.667424   65407 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/apiserver.key.9042d4f1
	I0927 18:37:51.667462   65407 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/proxy-client.key
	I0927 18:37:51.667565   65407 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem (1338 bytes)
	W0927 18:37:51.667594   65407 certs.go:480] ignoring /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368_empty.pem, impossibly tiny 0 bytes
	I0927 18:37:51.667602   65407 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 18:37:51.667627   65407 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem (1082 bytes)
	I0927 18:37:51.667649   65407 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem (1123 bytes)
	I0927 18:37:51.667675   65407 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem (1671 bytes)
	I0927 18:37:51.667711   65407 certs.go:484] found cert: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem (1708 bytes)
	I0927 18:37:51.668260   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 18:37:51.692382   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0927 18:37:51.718404   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 18:37:51.744746   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 18:37:51.770826   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0927 18:37:51.805296   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0927 18:37:51.836661   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 18:37:51.862219   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/pause-670363/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0927 18:37:51.886456   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 18:37:51.913035   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/18368.pem --> /usr/share/ca-certificates/18368.pem (1338 bytes)
	I0927 18:37:51.936848   65407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/ssl/certs/183682.pem --> /usr/share/ca-certificates/183682.pem (1708 bytes)
	I0927 18:37:51.961974   65407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 18:37:51.979481   65407 ssh_runner.go:195] Run: openssl version
	I0927 18:37:51.985264   65407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 18:37:51.995528   65407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:37:51.999854   65407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 27 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:37:51.999911   65407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 18:37:52.005267   65407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0927 18:37:52.015992   65407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18368.pem && ln -fs /usr/share/ca-certificates/18368.pem /etc/ssl/certs/18368.pem"
	I0927 18:37:52.029395   65407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18368.pem
	I0927 18:37:52.034034   65407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 27 17:37 /usr/share/ca-certificates/18368.pem
	I0927 18:37:52.034118   65407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18368.pem
	I0927 18:37:52.039783   65407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18368.pem /etc/ssl/certs/51391683.0"
	I0927 18:37:52.048829   65407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183682.pem && ln -fs /usr/share/ca-certificates/183682.pem /etc/ssl/certs/183682.pem"
	I0927 18:37:52.059650   65407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183682.pem
	I0927 18:37:52.063868   65407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 27 17:37 /usr/share/ca-certificates/183682.pem
	I0927 18:37:52.063921   65407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183682.pem
	I0927 18:37:52.069285   65407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183682.pem /etc/ssl/certs/3ec20f2e.0"
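The link names in the three ln -fs steps above follow OpenSSL's hashed-directory convention: each preceding `openssl x509 -hash -noout` run computes the certificate's subject hash, and the cert is linked as <hash>.0 under /etc/ssl/certs so TLS clients can find it. The hash values themselves are not echoed in the log, but the chosen names indicate, for example, minikubeCA.pem -> b5213941.0, 18368.pem -> 51391683.0 and 183682.pem -> 3ec20f2e.0.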
	I0927 18:37:52.080150   65407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 18:37:52.084477   65407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0927 18:37:52.089814   65407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0927 18:37:52.095463   65407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0927 18:37:52.100925   65407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0927 18:37:52.106434   65407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0927 18:37:52.154585   65407 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
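The six `-checkend 86400` probes above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. A minimal Go sketch of the same check (a hypothetical standalone helper, not part of minikube) could look like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -in path -checkend <seconds>`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < d, nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if expiring {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}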
	I0927 18:37:52.190302   65407 kubeadm.go:392] StartCluster: {Name:pause-670363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:pause-670363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:52.190451   65407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0927 18:37:52.190567   65407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0927 18:37:52.399891   65407 cri.go:89] found id: "953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb"
	I0927 18:37:52.399918   65407 cri.go:89] found id: "5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0"
	I0927 18:37:52.399922   65407 cri.go:89] found id: "0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9"
	I0927 18:37:52.399925   65407 cri.go:89] found id: "dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d"
	I0927 18:37:52.399928   65407 cri.go:89] found id: "cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1"
	I0927 18:37:52.399932   65407 cri.go:89] found id: "2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596"
	I0927 18:37:52.399934   65407 cri.go:89] found id: "27816f0a27102a8b8988d5381e7d506bf41a42b177c47a224b59e39f1debb58f"
	I0927 18:37:52.399937   65407 cri.go:89] found id: "5caf4bb14dc4690e5e8ae2f90458e4c4a17696daf1330f8263210d6c0ca21e3f"
	I0927 18:37:52.399939   65407 cri.go:89] found id: ""
	I0927 18:37:52.399992   65407 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-670363 -n pause-670363
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-670363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-670363 logs -n 25: (1.358783968s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo docker                         | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo find                           | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo crio                           | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-268892                                     | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	| start   | -p pause-670363 --memory=2048                        | pause-670363              | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:37 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p auto-268892 --memory=3072                         | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:38 UTC |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-904897                            | stopped-upgrade-904897    | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	| start   | -p old-k8s-version-313570                            | old-k8s-version-313570    | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p pause-670363                                      | pause-670363              | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC | 27 Sep 24 18:38 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-477684                         | kubernetes-upgrade-477684 | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC | 27 Sep 24 18:37 UTC |
	| start   | -p kubernetes-upgrade-477684                         | kubernetes-upgrade-477684 | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 pgrep -a                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:37:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:37:58.542765   65578 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:37:58.543099   65578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:58.543115   65578 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:58.543121   65578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:58.543397   65578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:37:58.544174   65578 out.go:352] Setting JSON to false
	I0927 18:37:58.545548   65578 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8424,"bootTime":1727453855,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:37:58.545707   65578 start.go:139] virtualization: kvm guest
	I0927 18:37:58.548251   65578 out.go:177] * [kubernetes-upgrade-477684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:37:58.549949   65578 notify.go:220] Checking for updates...
	I0927 18:37:58.549987   65578 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:37:58.551317   65578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:37:58.552654   65578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:37:58.553991   65578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:37:58.555390   65578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:37:58.556885   65578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:37:58.559430   65578 config.go:182] Loaded profile config "kubernetes-upgrade-477684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 18:37:58.560063   65578 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:58.560119   65578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:58.577597   65578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0927 18:37:58.578081   65578 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:58.578707   65578 main.go:141] libmachine: Using API Version  1
	I0927 18:37:58.578728   65578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:58.579031   65578 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:58.579258   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:37:58.579497   65578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:37:58.579792   65578 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:58.579832   65578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:58.595328   65578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0927 18:37:58.595838   65578 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:58.596463   65578 main.go:141] libmachine: Using API Version  1
	I0927 18:37:58.596500   65578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:58.596887   65578 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:58.597094   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:37:58.636589   65578 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:37:58.637776   65578 start.go:297] selected driver: kvm2
	I0927 18:37:58.637791   65578 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:58.637888   65578 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:37:58.638565   65578 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:58.638633   65578 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:37:58.654455   65578 install.go:137] /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:37:58.654891   65578 cni.go:84] Creating CNI manager for ""
	I0927 18:37:58.654940   65578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:37:58.654969   65578 start.go:340] cluster config:
	{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socke
tVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:58.655076   65578 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:58.657620   65578 out.go:177] * Starting "kubernetes-upgrade-477684" primary control-plane node in "kubernetes-upgrade-477684" cluster
	I0927 18:37:58.658991   65578 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:37:58.659056   65578 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:37:58.659070   65578 cache.go:56] Caching tarball of preloaded images
	I0927 18:37:58.659160   65578 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:37:58.659172   65578 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:37:58.659320   65578 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json ...
	I0927 18:37:58.659550   65578 start.go:360] acquireMachinesLock for kubernetes-upgrade-477684: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:37:58.659605   65578 start.go:364] duration metric: took 32.122µs to acquireMachinesLock for "kubernetes-upgrade-477684"
	I0927 18:37:58.659627   65578 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:37:58.659635   65578 fix.go:54] fixHost starting: 
	I0927 18:37:58.659914   65578 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:58.659958   65578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:58.675550   65578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40531
	I0927 18:37:58.676105   65578 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:58.676606   65578 main.go:141] libmachine: Using API Version  1
	I0927 18:37:58.676630   65578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:58.676963   65578 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:58.677221   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:37:58.677388   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetState
	I0927 18:37:58.679228   65578 fix.go:112] recreateIfNeeded on kubernetes-upgrade-477684: state=Stopped err=<nil>
	I0927 18:37:58.679260   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	W0927 18:37:58.679449   65578 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:37:58.681941   65578 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-477684" ...
	I0927 18:37:57.487811   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 18:37:57.487850   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 18:37:57.487869   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:57.529509   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 18:37:57.529546   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 18:37:57.733938   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:57.739117   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:37:57.739143   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:37:58.233806   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:58.238231   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:37:58.238286   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:37:58.733903   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:58.742587   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:37:58.742624   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:37:59.233095   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:59.237500   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0927 18:37:59.243936   65407 api_server.go:141] control plane version: v1.31.1
	I0927 18:37:59.243962   65407 api_server.go:131] duration metric: took 4.011097668s to wait for apiserver health ...
	I0927 18:37:59.243970   65407 cni.go:84] Creating CNI manager for ""
	I0927 18:37:59.243976   65407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:37:59.246277   65407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 18:37:59.247602   65407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 18:37:59.257746   65407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 18:37:59.275655   65407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:37:59.275739   65407 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 18:37:59.275768   65407 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 18:37:59.284975   65407 system_pods.go:59] 6 kube-system pods found
	I0927 18:37:59.285007   65407 system_pods.go:61] "coredns-7c65d6cfc9-skggj" [791b00fc-3bda-4bf9-a341-8a369bbdcc5d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 18:37:59.285015   65407 system_pods.go:61] "etcd-pause-670363" [242cc019-dd4b-42d0-84d8-2252d59f7ce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 18:37:59.285024   65407 system_pods.go:61] "kube-apiserver-pause-670363" [3efcaba6-afa5-4f82-834e-09922f7dee83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 18:37:59.285034   65407 system_pods.go:61] "kube-controller-manager-pause-670363" [a790a0b0-4114-48f1-82de-6a042d70fb3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 18:37:59.285041   65407 system_pods.go:61] "kube-proxy-hp2m9" [a8ff9fbb-0f43-4bf8-a3e3-315e1a325488] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 18:37:59.285045   65407 system_pods.go:61] "kube-scheduler-pause-670363" [948851c1-8c60-4c30-a079-871360fded9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 18:37:59.285051   65407 system_pods.go:74] duration metric: took 9.3731ms to wait for pod list to return data ...
	I0927 18:37:59.285064   65407 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:37:59.288525   65407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:37:59.288559   65407 node_conditions.go:123] node cpu capacity is 2
	I0927 18:37:59.288574   65407 node_conditions.go:105] duration metric: took 3.504237ms to run NodePressure ...
	I0927 18:37:59.288599   65407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:37:59.563984   65407 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 18:37:59.576010   65407 kubeadm.go:739] kubelet initialised
	I0927 18:37:59.576043   65407 kubeadm.go:740] duration metric: took 12.027246ms waiting for restarted kubelet to initialise ...
	I0927 18:37:59.576053   65407 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:37:59.583294   65407 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:37:56.781933   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:59.281521   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:58.683663   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .Start
	I0927 18:37:58.683975   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring networks are active...
	I0927 18:37:58.685082   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring network default is active
	I0927 18:37:58.685614   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring network mk-kubernetes-upgrade-477684 is active
	I0927 18:37:58.686066   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Getting domain xml...
	I0927 18:37:58.686903   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Creating domain...
	I0927 18:38:00.007379   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Waiting to get IP...
	I0927 18:38:00.008092   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.008535   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.008561   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:00.008498   65612 retry.go:31] will retry after 309.141514ms: waiting for machine to come up
	I0927 18:38:00.318883   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.319394   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.319418   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:00.319361   65612 retry.go:31] will retry after 380.291216ms: waiting for machine to come up
	I0927 18:38:00.701136   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.701634   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.701664   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:00.701597   65612 retry.go:31] will retry after 379.099705ms: waiting for machine to come up
	I0927 18:38:01.082072   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.082534   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.082562   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:01.082486   65612 retry.go:31] will retry after 521.177971ms: waiting for machine to come up
	I0927 18:38:01.605060   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.605650   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.605679   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:01.605599   65612 retry.go:31] will retry after 539.928688ms: waiting for machine to come up
	I0927 18:38:02.147277   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.147791   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.147820   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:02.147737   65612 retry.go:31] will retry after 681.554336ms: waiting for machine to come up
	I0927 18:38:02.830742   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.831215   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.831237   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:02.831197   65612 retry.go:31] will retry after 1.095543572s: waiting for machine to come up
	I0927 18:38:01.590623   65407 pod_ready.go:103] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:03.590922   65407 pod_ready.go:103] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:05.090625   65407 pod_ready.go:93] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:05.090666   65407 pod_ready.go:82] duration metric: took 5.507343687s for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:05.090680   65407 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:01.281665   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:03.281980   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:05.780476   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:03.928005   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:03.928482   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:03.928505   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:03.928442   65612 retry.go:31] will retry after 1.177666447s: waiting for machine to come up
	I0927 18:38:05.107636   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:05.108177   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:05.108198   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:05.108122   65612 retry.go:31] will retry after 1.381904362s: waiting for machine to come up
	I0927 18:38:06.491855   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:06.492292   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:06.492324   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:06.492250   65612 retry.go:31] will retry after 1.812949158s: waiting for machine to come up
	I0927 18:38:08.307099   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:08.307617   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:08.307642   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:08.307574   65612 retry.go:31] will retry after 2.684729478s: waiting for machine to come up
	I0927 18:38:07.780612   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:09.281013   64485 pod_ready.go:93] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.281043   64485 pod_ready.go:82] duration metric: took 38.007168667s for pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.281054   64485 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.282941   64485 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xvfmq" not found
	I0927 18:38:09.282964   64485 pod_ready.go:82] duration metric: took 1.903975ms for pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace to be "Ready" ...
	E0927 18:38:09.282972   64485 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xvfmq" not found
	I0927 18:38:09.282979   64485 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.287595   64485 pod_ready.go:93] pod "etcd-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.287620   64485 pod_ready.go:82] duration metric: took 4.63436ms for pod "etcd-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.287633   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.293137   64485 pod_ready.go:93] pod "kube-apiserver-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.293178   64485 pod_ready.go:82] duration metric: took 5.536975ms for pod "kube-apiserver-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.293195   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.298363   64485 pod_ready.go:93] pod "kube-controller-manager-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.298390   64485 pod_ready.go:82] duration metric: took 5.184224ms for pod "kube-controller-manager-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.298402   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-vpdgz" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.478592   64485 pod_ready.go:93] pod "kube-proxy-vpdgz" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.478629   64485 pod_ready.go:82] duration metric: took 180.218179ms for pod "kube-proxy-vpdgz" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.478668   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.878743   64485 pod_ready.go:93] pod "kube-scheduler-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.878773   64485 pod_ready.go:82] duration metric: took 400.089376ms for pod "kube-scheduler-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.878784   64485 pod_ready.go:39] duration metric: took 38.622865704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:09.878802   64485 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:38:09.878861   64485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:38:09.898126   64485 api_server.go:72] duration metric: took 39.439908312s to wait for apiserver process to appear ...
	I0927 18:38:09.898163   64485 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:38:09.898190   64485 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0927 18:38:09.903799   64485 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0927 18:38:09.905009   64485 api_server.go:141] control plane version: v1.31.1
	I0927 18:38:09.905035   64485 api_server.go:131] duration metric: took 6.864051ms to wait for apiserver health ...
	I0927 18:38:09.905045   64485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:38:10.081255   64485 system_pods.go:59] 7 kube-system pods found
	I0927 18:38:10.081287   64485 system_pods.go:61] "coredns-7c65d6cfc9-pwf7q" [a073af62-e848-4994-a501-b32b28e91435] Running
	I0927 18:38:10.081292   64485 system_pods.go:61] "etcd-auto-268892" [416c5787-5349-4451-8ad7-ee987ee333f7] Running
	I0927 18:38:10.081296   64485 system_pods.go:61] "kube-apiserver-auto-268892" [d51f6369-3be4-4029-91af-3c42b76bdd59] Running
	I0927 18:38:10.081299   64485 system_pods.go:61] "kube-controller-manager-auto-268892" [a40f8fa0-a7f4-44f3-8831-8042d8f0616b] Running
	I0927 18:38:10.081303   64485 system_pods.go:61] "kube-proxy-vpdgz" [30ed7e8e-ac3e-4b16-a5af-db83c746e06b] Running
	I0927 18:38:10.081306   64485 system_pods.go:61] "kube-scheduler-auto-268892" [4744c822-4aa2-4a4e-8d26-7d6cea12845d] Running
	I0927 18:38:10.081309   64485 system_pods.go:61] "storage-provisioner" [defb8de5-b283-4616-bcb3-0f7491746ecf] Running
	I0927 18:38:10.081315   64485 system_pods.go:74] duration metric: took 176.26413ms to wait for pod list to return data ...
	I0927 18:38:10.081321   64485 default_sa.go:34] waiting for default service account to be created ...
	I0927 18:38:10.278309   64485 default_sa.go:45] found service account: "default"
	I0927 18:38:10.278348   64485 default_sa.go:55] duration metric: took 197.021295ms for default service account to be created ...
	I0927 18:38:10.278358   64485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 18:38:10.479910   64485 system_pods.go:86] 7 kube-system pods found
	I0927 18:38:10.479940   64485 system_pods.go:89] "coredns-7c65d6cfc9-pwf7q" [a073af62-e848-4994-a501-b32b28e91435] Running
	I0927 18:38:10.479946   64485 system_pods.go:89] "etcd-auto-268892" [416c5787-5349-4451-8ad7-ee987ee333f7] Running
	I0927 18:38:10.479950   64485 system_pods.go:89] "kube-apiserver-auto-268892" [d51f6369-3be4-4029-91af-3c42b76bdd59] Running
	I0927 18:38:10.479954   64485 system_pods.go:89] "kube-controller-manager-auto-268892" [a40f8fa0-a7f4-44f3-8831-8042d8f0616b] Running
	I0927 18:38:10.479957   64485 system_pods.go:89] "kube-proxy-vpdgz" [30ed7e8e-ac3e-4b16-a5af-db83c746e06b] Running
	I0927 18:38:10.479960   64485 system_pods.go:89] "kube-scheduler-auto-268892" [4744c822-4aa2-4a4e-8d26-7d6cea12845d] Running
	I0927 18:38:10.479963   64485 system_pods.go:89] "storage-provisioner" [defb8de5-b283-4616-bcb3-0f7491746ecf] Running
	I0927 18:38:10.479969   64485 system_pods.go:126] duration metric: took 201.605948ms to wait for k8s-apps to be running ...
	I0927 18:38:10.479976   64485 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 18:38:10.480019   64485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:38:10.496795   64485 system_svc.go:56] duration metric: took 16.808087ms WaitForService to wait for kubelet
	I0927 18:38:10.496826   64485 kubeadm.go:582] duration metric: took 40.038612881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:38:10.496846   64485 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:38:10.679220   64485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:38:10.679262   64485 node_conditions.go:123] node cpu capacity is 2
	I0927 18:38:10.679278   64485 node_conditions.go:105] duration metric: took 182.425523ms to run NodePressure ...
	I0927 18:38:10.679292   64485 start.go:241] waiting for startup goroutines ...
	I0927 18:38:10.679301   64485 start.go:246] waiting for cluster config update ...
	I0927 18:38:10.679314   64485 start.go:255] writing updated cluster config ...
	I0927 18:38:10.679605   64485 ssh_runner.go:195] Run: rm -f paused
	I0927 18:38:10.727421   64485 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 18:38:10.729528   64485 out.go:177] * Done! kubectl is now configured to use "auto-268892" cluster and "default" namespace by default
	I0927 18:38:07.097152   65407 pod_ready.go:103] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:09.097603   65407 pod_ready.go:103] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:10.994197   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:10.994940   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:10.994971   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:10.994893   65612 retry.go:31] will retry after 2.852270096s: waiting for machine to come up
	I0927 18:38:11.101203   65407 pod_ready.go:103] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:13.097874   65407 pod_ready.go:93] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.097898   65407 pod_ready.go:82] duration metric: took 8.007210789s for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.097906   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.103034   65407 pod_ready.go:93] pod "kube-apiserver-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.103060   65407 pod_ready.go:82] duration metric: took 5.146887ms for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.103073   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.610406   65407 pod_ready.go:93] pod "kube-controller-manager-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.610429   65407 pod_ready.go:82] duration metric: took 507.3481ms for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.610450   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.615894   65407 pod_ready.go:93] pod "kube-proxy-hp2m9" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.615917   65407 pod_ready.go:82] duration metric: took 5.459583ms for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.615928   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.621181   65407 pod_ready.go:93] pod "kube-scheduler-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.621206   65407 pod_ready.go:82] duration metric: took 5.271211ms for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.621215   65407 pod_ready.go:39] duration metric: took 14.045152047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:13.621244   65407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 18:38:13.636756   65407 ops.go:34] apiserver oom_adj: -16
	I0927 18:38:13.636784   65407 kubeadm.go:597] duration metric: took 21.056813311s to restartPrimaryControlPlane
	I0927 18:38:13.636795   65407 kubeadm.go:394] duration metric: took 21.446501695s to StartCluster
	I0927 18:38:13.636816   65407 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:13.636906   65407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:38:13.637957   65407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:13.638169   65407 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 18:38:13.638303   65407 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 18:38:13.638521   65407 config.go:182] Loaded profile config "pause-670363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:38:13.639846   65407 out.go:177] * Verifying Kubernetes components...
	I0927 18:38:13.639846   65407 out.go:177] * Enabled addons: 
	I0927 18:38:13.641887   65407 addons.go:510] duration metric: took 3.593351ms for enable addons: enabled=[]
	I0927 18:38:13.641907   65407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:38:13.799774   65407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:38:13.818406   65407 node_ready.go:35] waiting up to 6m0s for node "pause-670363" to be "Ready" ...
	I0927 18:38:13.821992   65407 node_ready.go:49] node "pause-670363" has status "Ready":"True"
	I0927 18:38:13.822023   65407 node_ready.go:38] duration metric: took 3.584287ms for node "pause-670363" to be "Ready" ...
	I0927 18:38:13.822034   65407 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:13.897862   65407 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.295391   65407 pod_ready.go:93] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:14.295416   65407 pod_ready.go:82] duration metric: took 397.530639ms for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.295426   65407 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.694931   65407 pod_ready.go:93] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:14.694962   65407 pod_ready.go:82] duration metric: took 399.52994ms for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.694975   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.095910   65407 pod_ready.go:93] pod "kube-apiserver-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:15.095938   65407 pod_ready.go:82] duration metric: took 400.954032ms for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.095951   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.495173   65407 pod_ready.go:93] pod "kube-controller-manager-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:15.495198   65407 pod_ready.go:82] duration metric: took 399.238887ms for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.495209   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.895665   65407 pod_ready.go:93] pod "kube-proxy-hp2m9" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:15.895704   65407 pod_ready.go:82] duration metric: took 400.486882ms for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.895720   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:16.294938   65407 pod_ready.go:93] pod "kube-scheduler-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:16.294971   65407 pod_ready.go:82] duration metric: took 399.242542ms for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:16.294983   65407 pod_ready.go:39] duration metric: took 2.472936854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:16.295001   65407 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:38:16.295051   65407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:38:16.310200   65407 api_server.go:72] duration metric: took 2.672005153s to wait for apiserver process to appear ...
	I0927 18:38:16.310230   65407 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:38:16.310250   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:38:16.315615   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0927 18:38:16.316400   65407 api_server.go:141] control plane version: v1.31.1
	I0927 18:38:16.316416   65407 api_server.go:131] duration metric: took 6.180781ms to wait for apiserver health ...
	I0927 18:38:16.316424   65407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:38:16.497011   65407 system_pods.go:59] 6 kube-system pods found
	I0927 18:38:16.497038   65407 system_pods.go:61] "coredns-7c65d6cfc9-skggj" [791b00fc-3bda-4bf9-a341-8a369bbdcc5d] Running
	I0927 18:38:16.497043   65407 system_pods.go:61] "etcd-pause-670363" [242cc019-dd4b-42d0-84d8-2252d59f7ce0] Running
	I0927 18:38:16.497047   65407 system_pods.go:61] "kube-apiserver-pause-670363" [3efcaba6-afa5-4f82-834e-09922f7dee83] Running
	I0927 18:38:16.497051   65407 system_pods.go:61] "kube-controller-manager-pause-670363" [a790a0b0-4114-48f1-82de-6a042d70fb3a] Running
	I0927 18:38:16.497054   65407 system_pods.go:61] "kube-proxy-hp2m9" [a8ff9fbb-0f43-4bf8-a3e3-315e1a325488] Running
	I0927 18:38:16.497057   65407 system_pods.go:61] "kube-scheduler-pause-670363" [948851c1-8c60-4c30-a079-871360fded9d] Running
	I0927 18:38:16.497063   65407 system_pods.go:74] duration metric: took 180.633999ms to wait for pod list to return data ...
	I0927 18:38:16.497070   65407 default_sa.go:34] waiting for default service account to be created ...
	I0927 18:38:16.695791   65407 default_sa.go:45] found service account: "default"
	I0927 18:38:16.695814   65407 default_sa.go:55] duration metric: took 198.739486ms for default service account to be created ...
	I0927 18:38:16.695823   65407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 18:38:16.897658   65407 system_pods.go:86] 6 kube-system pods found
	I0927 18:38:16.897686   65407 system_pods.go:89] "coredns-7c65d6cfc9-skggj" [791b00fc-3bda-4bf9-a341-8a369bbdcc5d] Running
	I0927 18:38:16.897692   65407 system_pods.go:89] "etcd-pause-670363" [242cc019-dd4b-42d0-84d8-2252d59f7ce0] Running
	I0927 18:38:16.897696   65407 system_pods.go:89] "kube-apiserver-pause-670363" [3efcaba6-afa5-4f82-834e-09922f7dee83] Running
	I0927 18:38:16.897700   65407 system_pods.go:89] "kube-controller-manager-pause-670363" [a790a0b0-4114-48f1-82de-6a042d70fb3a] Running
	I0927 18:38:16.897703   65407 system_pods.go:89] "kube-proxy-hp2m9" [a8ff9fbb-0f43-4bf8-a3e3-315e1a325488] Running
	I0927 18:38:16.897706   65407 system_pods.go:89] "kube-scheduler-pause-670363" [948851c1-8c60-4c30-a079-871360fded9d] Running
	I0927 18:38:16.897712   65407 system_pods.go:126] duration metric: took 201.884122ms to wait for k8s-apps to be running ...
	I0927 18:38:16.897718   65407 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 18:38:16.897766   65407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:38:16.911909   65407 system_svc.go:56] duration metric: took 14.182843ms WaitForService to wait for kubelet
	I0927 18:38:16.911937   65407 kubeadm.go:582] duration metric: took 3.273747111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:38:16.911955   65407 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:38:17.096056   65407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:38:17.096082   65407 node_conditions.go:123] node cpu capacity is 2
	I0927 18:38:17.096095   65407 node_conditions.go:105] duration metric: took 184.133998ms to run NodePressure ...
	I0927 18:38:17.096108   65407 start.go:241] waiting for startup goroutines ...
	I0927 18:38:17.096116   65407 start.go:246] waiting for cluster config update ...
	I0927 18:38:17.096126   65407 start.go:255] writing updated cluster config ...
	I0927 18:38:17.096430   65407 ssh_runner.go:195] Run: rm -f paused
	I0927 18:38:17.142318   65407 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 18:38:17.144415   65407 out.go:177] * Done! kubectl is now configured to use "pause-670363" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.791949407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462297791927834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d47d2e55-6c36-4df1-bd40-9c91c495b358 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.792562972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74f6206a-30ae-408e-bf9f-104e865037b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.792632351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74f6206a-30ae-408e-bf9f-104e865037b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.792897611Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74f6206a-30ae-408e-bf9f-104e865037b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.832741569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=51dc214c-d466-4c03-9636-f5b58cc2fd08 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.832814501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=51dc214c-d466-4c03-9636-f5b58cc2fd08 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.834010177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5d806c0-725e-4788-a7a9-e83ba2190dd2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.834681614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462297834648669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5d806c0-725e-4788-a7a9-e83ba2190dd2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.835229969Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9da24cff-e4ea-416c-82e0-967553b0d41e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.835313096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9da24cff-e4ea-416c-82e0-967553b0d41e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.835567831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9da24cff-e4ea-416c-82e0-967553b0d41e name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.883314222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e7c4c23-8aaf-46e8-bda8-5ff616e7a06a name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.883391759Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e7c4c23-8aaf-46e8-bda8-5ff616e7a06a name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.884793322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ecbab07-c5aa-4aa3-bcd9-3b0d01881f75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.885296616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462297885240659,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ecbab07-c5aa-4aa3-bcd9-3b0d01881f75 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.885789199Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eed2b964-ea9b-490e-a0cc-f270074406e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.885852084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eed2b964-ea9b-490e-a0cc-f270074406e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.886130973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eed2b964-ea9b-490e-a0cc-f270074406e9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.929232909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06767251-f9f9-4a87-a544-200fefce6cc0 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.929365834Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06767251-f9f9-4a87-a544-200fefce6cc0 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.930629009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f80b3abe-87f0-489f-bbe4-924da00e5bd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.931487604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462297931458954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f80b3abe-87f0-489f-bbe4-924da00e5bd1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.931967578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf86ed3d-57b1-464d-ae34-66fadd24c735 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.932021241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf86ed3d-57b1-464d-ae34-66fadd24c735 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:17 pause-670363 crio[2882]: time="2024-09-27 18:38:17.932338215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf86ed3d-57b1-464d-ae34-66fadd24c735 name=/runtime.v1.RuntimeService/ListContainers
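The repeated Version, ImageFsInfo and ListContainers entries above are CRI-O answering the standard CRI polling calls. As a minimal sketch (assuming crictl is available in the guest and the socket path matches the unix:///var/run/crio/crio.sock endpoint referenced elsewhere in this log), the same three queries can be issued by hand on the node:

	# Point crictl at the CRI-O socket the kubelet is using.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version      # RuntimeService/Version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo  # ImageService/ImageFsInfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a        # RuntimeService/ListContainers, no filter
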
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2c71d5f3f50d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   19 seconds ago      Running             coredns                   2                   aa63a4aeb3e7c       coredns-7c65d6cfc9-skggj
	158b56532d58b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago      Running             kube-proxy                2                   6e17bf7a2dcaa       kube-proxy-hp2m9
	77efc66c2b7d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   23 seconds ago      Running             kube-apiserver            2                   1206c72258a21       kube-apiserver-pause-670363
	a9ddc858a72cf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   23 seconds ago      Running             kube-controller-manager   2                   3457adafd6cfa       kube-controller-manager-pause-670363
	44c1762aae670       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   23 seconds ago      Running             kube-scheduler            2                   6b7506be02cca       kube-scheduler-pause-670363
	5f30f087b7b82       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   7198c268b5c2e       etcd-pause-670363
	953dd63cf444b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   27 seconds ago      Exited              coredns                   1                   04b6569b50341       coredns-7c65d6cfc9-skggj
	5a418e42dbad6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   28 seconds ago      Exited              kube-apiserver            1                   2b26b192fcfb0       kube-apiserver-pause-670363
	0e3f13ae3ea85       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago      Exited              etcd                      1                   f97a33d9f6d75       etcd-pause-670363
	dcd58c9354a7a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   28 seconds ago      Exited              kube-scheduler            1                   45d03377a3031       kube-scheduler-pause-670363
	cc44af5da4832       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   28 seconds ago      Exited              kube-proxy                1                   bcd315f540bca       kube-proxy-hp2m9
	2d0fb0b055331       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   28 seconds ago      Exited              kube-controller-manager   1                   439fef07f579a       kube-controller-manager-pause-670363
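	
	The table above is the condensed view of the same ListContainers data: each control-plane component has a Running attempt 2 and an Exited attempt 1. As a sketch only, a typical crictl follow-up on one of the exited first attempts (for example the coredns container 953dd63cf444b, whose log section below is empty) would be:
	
	sudo crictl inspect 953dd63cf444b   # full status, exit code, restart-count annotations
	sudo crictl logs 953dd63cf444b      # container log, if anything was written before it exited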
	
	
	==> coredns [953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb] <==
	
	
	==> coredns [b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38946 - 28035 "HINFO IN 4201034557618249852.7739285813253681806. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012315836s
	
	
	==> describe nodes <==
	Name:               pause-670363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-670363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=pause-670363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T18_37_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-670363
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:38:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:37:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.48
	  Hostname:    pause-670363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc2f9bb4aff641819f1740156b9a7a17
	  System UUID:                fc2f9bb4-aff6-4181-9f17-40156b9a7a17
	  Boot ID:                    4e86b357-0ea3-4670-a8be-f8f6638c2026
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-skggj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-pause-670363                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         78s
	  kube-system                 kube-apiserver-pause-670363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-670363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-hp2m9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-670363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 71s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-670363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-670363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-670363 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  NodeReady                77s                kubelet          Node pause-670363 status is now: NodeReady
	  Normal  RegisteredNode           74s                node-controller  Node pause-670363 event: Registered Node pause-670363 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-670363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-670363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-670363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-670363 event: Registered Node pause-670363 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.990979] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.060751] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053433] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.204507] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.122328] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.290366] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +4.235310] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +3.976728] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.059511] kauditd_printk_skb: 158 callbacks suppressed
	[Sep27 18:37] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.078979] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.309275] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.062282] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.031990] kauditd_printk_skb: 88 callbacks suppressed
	[ +31.087444] systemd-fstab-generator[2241]: Ignoring "noauto" option for root device
	[  +0.246235] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.339189] systemd-fstab-generator[2497]: Ignoring "noauto" option for root device
	[  +0.280809] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.551296] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +1.100075] systemd-fstab-generator[3115]: Ignoring "noauto" option for root device
	[  +2.388716] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.276088] kauditd_printk_skb: 266 callbacks suppressed
	[Sep27 18:38] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.507659] systemd-fstab-generator[3986]: Ignoring "noauto" option for root device
	
	
	==> etcd [0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9] <==
	{"level":"info","ts":"2024-09-27T18:37:50.250673Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-27T18:37:50.286377Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","commit-index":416}
	{"level":"info","ts":"2024-09-27T18:37:50.294161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-27T18:37:50.297381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became follower at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:50.297476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f76d6fbad492a1d6 [peers: [], term: 2, commit: 416, applied: 0, lastindex: 416, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-27T18:37:50.317146Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-27T18:37:50.354574Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":400}
	{"level":"info","ts":"2024-09-27T18:37:50.403415Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-27T18:37:50.416618Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f76d6fbad492a1d6","timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:37:50.417002Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f76d6fbad492a1d6"}
	{"level":"info","ts":"2024-09-27T18:37:50.417045Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"f76d6fbad492a1d6","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-27T18:37:50.456575Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T18:37:50.456798Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:50.456841Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:50.456848Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:50.467956Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:37:50.488628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 switched to configuration voters=(17829029348050641366)"}
	{"level":"info","ts":"2024-09-27T18:37:50.488732Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","added-peer-id":"f76d6fbad492a1d6","added-peer-peer-urls":["https://192.168.61.48:2380"]}
	{"level":"info","ts":"2024-09-27T18:37:50.488857Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:50.488903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:50.527726Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T18:37:50.533505Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f76d6fbad492a1d6","initial-advertise-peer-urls":["https://192.168.61.48:2380"],"listen-peer-urls":["https://192.168.61.48:2380"],"advertise-client-urls":["https://192.168.61.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:37:50.535309Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:37:50.535419Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.48:2380"}
	{"level":"info","ts":"2024-09-27T18:37:50.536692Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.48:2380"}
	
	
	==> etcd [5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603] <==
	{"level":"info","ts":"2024-09-27T18:37:54.913251Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:54.926447Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:54.926483Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:54.915592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 switched to configuration voters=(17829029348050641366)"}
	{"level":"info","ts":"2024-09-27T18:37:54.926677Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","added-peer-id":"f76d6fbad492a1d6","added-peer-peer-urls":["https://192.168.61.48:2380"]}
	{"level":"info","ts":"2024-09-27T18:37:54.926786Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:54.927349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:54.913110Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T18:37:54.923546Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:37:55.879994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:55.880207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:55.880365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 received MsgPreVoteResp from f76d6fbad492a1d6 at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:55.880480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.880551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 received MsgVoteResp from f76d6fbad492a1d6 at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.880612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.880642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f76d6fbad492a1d6 elected leader f76d6fbad492a1d6 at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.883469Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f76d6fbad492a1d6","local-member-attributes":"{Name:pause-670363 ClientURLs:[https://192.168.61.48:2379]}","request-path":"/0/members/f76d6fbad492a1d6/attributes","cluster-id":"6f0fba60f4785994","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:37:55.883664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:37:55.884329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:37:55.885697Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:37:55.887167Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.48:2379"}
	{"level":"info","ts":"2024-09-27T18:37:55.888178Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:37:55.889698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T18:37:55.890309Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T18:37:55.890339Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:38:18 up 1 min,  0 users,  load average: 0.86, 0.33, 0.12
	Linux pause-670363 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0] <==
	I0927 18:37:50.466208       1 options.go:228] external host was not specified, using 192.168.61.48
	I0927 18:37:50.514324       1 server.go:142] Version: v1.31.1
	I0927 18:37:50.514411       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad] <==
	I0927 18:37:57.536581       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 18:37:57.536689       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 18:37:57.537317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 18:37:57.537804       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 18:37:57.544933       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 18:37:57.546188       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 18:37:57.545350       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 18:37:57.560439       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0927 18:37:57.569404       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 18:37:57.577817       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 18:37:57.579130       1 aggregator.go:171] initial CRD sync complete...
	I0927 18:37:57.579202       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 18:37:57.579228       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 18:37:57.579293       1 cache.go:39] Caches are synced for autoregister controller
	I0927 18:37:57.586579       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 18:37:57.586615       1 policy_source.go:224] refreshing policies
	I0927 18:37:57.655371       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 18:37:58.438695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 18:37:59.415934       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:37:59.432382       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:37:59.475578       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:37:59.510623       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:37:59.526019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:38:00.876868       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 18:38:01.169692       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596] <==
	
	
	==> kube-controller-manager [a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120] <==
	I0927 18:38:00.919457       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0927 18:38:00.919463       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0927 18:38:00.919543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-670363"
	I0927 18:38:00.922154       1 shared_informer.go:320] Caches are synced for daemon sets
	I0927 18:38:00.925078       1 shared_informer.go:320] Caches are synced for job
	I0927 18:38:00.925170       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0927 18:38:00.927814       1 shared_informer.go:320] Caches are synced for GC
	I0927 18:38:00.931516       1 shared_informer.go:320] Caches are synced for persistent volume
	I0927 18:38:00.981300       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 18:38:00.992571       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 18:38:01.016327       1 shared_informer.go:320] Caches are synced for disruption
	I0927 18:38:01.017827       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0927 18:38:01.021457       1 shared_informer.go:320] Caches are synced for crt configmap
	I0927 18:38:01.030489       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0927 18:38:01.036338       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0927 18:38:01.036628       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0927 18:38:01.036675       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0927 18:38:01.036747       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0927 18:38:01.133218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="265.721177ms"
	I0927 18:38:01.133514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.334µs"
	I0927 18:38:01.533606       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 18:38:01.567397       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 18:38:01.567544       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 18:38:04.733949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.775218ms"
	I0927 18:38:04.734281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="95.42µs"
	
	
	==> kube-proxy [158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:37:58.759439       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:37:58.776890       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.48"]
	E0927 18:37:58.777464       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:37:58.835439       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:37:58.835486       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:37:58.835515       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:37:58.840454       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:37:58.840751       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:37:58.840793       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:37:58.842588       1 config.go:199] "Starting service config controller"
	I0927 18:37:58.847806       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:37:58.842761       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:37:58.852903       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:37:58.852912       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 18:37:58.844222       1 config.go:328] "Starting node config controller"
	I0927 18:37:58.852942       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:37:58.852946       1 shared_informer.go:320] Caches are synced for node config
	I0927 18:37:58.948626       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1] <==
	
	
	==> kube-scheduler [44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6] <==
	I0927 18:37:56.213241       1 serving.go:386] Generated self-signed cert in-memory
	W0927 18:37:57.510862       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:37:57.511003       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:37:57.511034       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:37:57.511104       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:37:57.569137       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 18:37:57.570010       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:37:57.572242       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 18:37:57.572434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 18:37:57.572536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 18:37:57.577894       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:37:57.678506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d] <==
	
	
	==> kubelet <==
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.383686    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d40994cf409da134ffe3d631f20b6f88-kubeconfig\") pod \"kube-controller-manager-pause-670363\" (UID: \"d40994cf409da134ffe3d631f20b6f88\") " pod="kube-system/kube-controller-manager-pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.383701    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d40994cf409da134ffe3d631f20b6f88-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-670363\" (UID: \"d40994cf409da134ffe3d631f20b6f88\") " pod="kube-system/kube-controller-manager-pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.542897    3556 kubelet_node_status.go:72] "Attempting to register node" node="pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: E0927 18:37:54.543833    3556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.48:8443: connect: connection refused" node="pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.571472    3556 scope.go:117] "RemoveContainer" containerID="dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.573462    3556 scope.go:117] "RemoveContainer" containerID="0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.575792    3556 scope.go:117] "RemoveContainer" containerID="2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.577418    3556 scope.go:117] "RemoveContainer" containerID="5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: E0927 18:37:54.764798    3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-670363?timeout=10s\": dial tcp 192.168.61.48:8443: connect: connection refused" interval="800ms"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.945144    3556 kubelet_node_status.go:72] "Attempting to register node" node="pause-670363"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.648526    3556 kubelet_node_status.go:111] "Node was previously registered" node="pause-670363"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.648642    3556 kubelet_node_status.go:75] "Successfully registered node" node="pause-670363"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.648676    3556 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.650097    3556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.135008    3556 apiserver.go:52] "Watching apiserver"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.160931    3556 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.260703    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8ff9fbb-0f43-4bf8-a3e3-315e1a325488-lib-modules\") pod \"kube-proxy-hp2m9\" (UID: \"a8ff9fbb-0f43-4bf8-a3e3-315e1a325488\") " pod="kube-system/kube-proxy-hp2m9"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.260803    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8ff9fbb-0f43-4bf8-a3e3-315e1a325488-xtables-lock\") pod \"kube-proxy-hp2m9\" (UID: \"a8ff9fbb-0f43-4bf8-a3e3-315e1a325488\") " pod="kube-system/kube-proxy-hp2m9"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.439977    3556 scope.go:117] "RemoveContainer" containerID="953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.441490    3556 scope.go:117] "RemoveContainer" containerID="cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1"
	Sep 27 18:38:04 pause-670363 kubelet[3556]: E0927 18:38:04.243433    3556 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462284243051232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:38:04 pause-670363 kubelet[3556]: E0927 18:38:04.243730    3556 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462284243051232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:38:04 pause-670363 kubelet[3556]: I0927 18:38:04.694121    3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 27 18:38:14 pause-670363 kubelet[3556]: E0927 18:38:14.247570    3556 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462294247019072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:38:14 pause-670363 kubelet[3556]: E0927 18:38:14.247875    3556 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462294247019072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-670363 -n pause-670363
helpers_test.go:261: (dbg) Run:  kubectl --context pause-670363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-670363 -n pause-670363
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-670363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-670363 logs -n 25: (1.395436332s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo docker                         | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo cat                            | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo                                | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo find                           | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-268892 sudo crio                           | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-268892                                     | cilium-268892             | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	| start   | -p pause-670363 --memory=2048                        | pause-670363              | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:37 UTC |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p auto-268892 --memory=3072                         | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:38 UTC |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-904897                            | stopped-upgrade-904897    | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC | 27 Sep 24 18:36 UTC |
	| start   | -p old-k8s-version-313570                            | old-k8s-version-313570    | jenkins | v1.34.0 | 27 Sep 24 18:36 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| start   | -p pause-670363                                      | pause-670363              | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC | 27 Sep 24 18:38 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-477684                         | kubernetes-upgrade-477684 | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC | 27 Sep 24 18:37 UTC |
	| start   | -p kubernetes-upgrade-477684                         | kubernetes-upgrade-477684 | jenkins | v1.34.0 | 27 Sep 24 18:37 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-268892 pgrep -a                              | auto-268892               | jenkins | v1.34.0 | 27 Sep 24 18:38 UTC | 27 Sep 24 18:38 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 18:37:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 18:37:58.542765   65578 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:37:58.543099   65578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:58.543115   65578 out.go:358] Setting ErrFile to fd 2...
	I0927 18:37:58.543121   65578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:37:58.543397   65578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:37:58.544174   65578 out.go:352] Setting JSON to false
	I0927 18:37:58.545548   65578 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8424,"bootTime":1727453855,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 18:37:58.545707   65578 start.go:139] virtualization: kvm guest
	I0927 18:37:58.548251   65578 out.go:177] * [kubernetes-upgrade-477684] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 18:37:58.549949   65578 notify.go:220] Checking for updates...
	I0927 18:37:58.549987   65578 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 18:37:58.551317   65578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 18:37:58.552654   65578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:37:58.553991   65578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 18:37:58.555390   65578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 18:37:58.556885   65578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 18:37:58.559430   65578 config.go:182] Loaded profile config "kubernetes-upgrade-477684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0927 18:37:58.560063   65578 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:58.560119   65578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:58.577597   65578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0927 18:37:58.578081   65578 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:58.578707   65578 main.go:141] libmachine: Using API Version  1
	I0927 18:37:58.578728   65578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:58.579031   65578 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:58.579258   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:37:58.579497   65578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 18:37:58.579792   65578 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:58.579832   65578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:58.595328   65578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0927 18:37:58.595838   65578 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:58.596463   65578 main.go:141] libmachine: Using API Version  1
	I0927 18:37:58.596500   65578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:58.596887   65578 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:58.597094   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:37:58.636589   65578 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 18:37:58.637776   65578 start.go:297] selected driver: kvm2
	I0927 18:37:58.637791   65578 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:58.637888   65578 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 18:37:58.638565   65578 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:58.638633   65578 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 18:37:58.654455   65578 install.go:137] /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I0927 18:37:58.654891   65578 cni.go:84] Creating CNI manager for ""
	I0927 18:37:58.654940   65578 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:37:58.654969   65578 start.go:340] cluster config:
	{Name:kubernetes-upgrade-477684 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-477684 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 18:37:58.655076   65578 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 18:37:58.657620   65578 out.go:177] * Starting "kubernetes-upgrade-477684" primary control-plane node in "kubernetes-upgrade-477684" cluster
	I0927 18:37:58.658991   65578 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 18:37:58.659056   65578 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 18:37:58.659070   65578 cache.go:56] Caching tarball of preloaded images
	I0927 18:37:58.659160   65578 preload.go:172] Found /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0927 18:37:58.659172   65578 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0927 18:37:58.659320   65578 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json ...
	I0927 18:37:58.659550   65578 start.go:360] acquireMachinesLock for kubernetes-upgrade-477684: {Name:mk529b317123c9223f6fad4fa75a3e87c321d1a7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 18:37:58.659605   65578 start.go:364] duration metric: took 32.122µs to acquireMachinesLock for "kubernetes-upgrade-477684"
	I0927 18:37:58.659627   65578 start.go:96] Skipping create...Using existing machine configuration
	I0927 18:37:58.659635   65578 fix.go:54] fixHost starting: 
	I0927 18:37:58.659914   65578 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19712-11184/.minikube/bin/docker-machine-driver-kvm2
	I0927 18:37:58.659958   65578 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:37:58.675550   65578 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40531
	I0927 18:37:58.676105   65578 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:37:58.676606   65578 main.go:141] libmachine: Using API Version  1
	I0927 18:37:58.676630   65578 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:37:58.676963   65578 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:37:58.677221   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:37:58.677388   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetState
	I0927 18:37:58.679228   65578 fix.go:112] recreateIfNeeded on kubernetes-upgrade-477684: state=Stopped err=<nil>
	I0927 18:37:58.679260   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	W0927 18:37:58.679449   65578 fix.go:138] unexpected machine state, will restart: <nil>
	I0927 18:37:58.681941   65578 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-477684" ...
	I0927 18:37:57.487811   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 18:37:57.487850   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 18:37:57.487869   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:57.529509   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0927 18:37:57.529546   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0927 18:37:57.733938   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:57.739117   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:37:57.739143   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:37:58.233806   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:58.238231   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:37:58.238286   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:37:58.733903   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:58.742587   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0927 18:37:58.742624   65407 api_server.go:103] status: https://192.168.61.48:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0927 18:37:59.233095   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:37:59.237500   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0927 18:37:59.243936   65407 api_server.go:141] control plane version: v1.31.1
	I0927 18:37:59.243962   65407 api_server.go:131] duration metric: took 4.011097668s to wait for apiserver health ...
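	The healthz sequence above is the usual control-plane bring-up pattern: anonymous probes are rejected with 403 until RBAC bootstrap allows them, the endpoint then returns 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still reported as failed, and finally it returns 200. Below is a minimal Go sketch of that polling loop, assuming a plain HTTPS GET with TLS verification disabled; it is illustrative only, not minikube's certificate-authenticated check, and the URL, interval, and timeout are taken or guessed from the log.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping TLS verification is acceptable only for this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "ok" case seen in the log
				}
				// 403 (anonymous user) and 500 (post-start hooks still failing)
				// both mean "not ready yet", as the log above shows.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.48:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}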
	I0927 18:37:59.243970   65407 cni.go:84] Creating CNI manager for ""
	I0927 18:37:59.243976   65407 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 18:37:59.246277   65407 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 18:37:59.247602   65407 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0927 18:37:59.257746   65407 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
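	The 496-byte conflist written above is not reproduced in the log. As a rough idea of the shape of a bridge CNI configuration dropped into /etc/cni/net.d, a hedged sketch follows; the plugin flags and pod subnet are assumptions for illustration, not the file minikube actually generated.

	package main

	import "os"

	// bridgeConflist is an illustrative bridge + host-local CNI configuration,
	// not the exact contents minikube writes.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Writing under /etc/cni/net.d requires root on the node.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}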
	I0927 18:37:59.275655   65407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:37:59.275739   65407 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0927 18:37:59.275768   65407 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0927 18:37:59.284975   65407 system_pods.go:59] 6 kube-system pods found
	I0927 18:37:59.285007   65407 system_pods.go:61] "coredns-7c65d6cfc9-skggj" [791b00fc-3bda-4bf9-a341-8a369bbdcc5d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0927 18:37:59.285015   65407 system_pods.go:61] "etcd-pause-670363" [242cc019-dd4b-42d0-84d8-2252d59f7ce0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0927 18:37:59.285024   65407 system_pods.go:61] "kube-apiserver-pause-670363" [3efcaba6-afa5-4f82-834e-09922f7dee83] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0927 18:37:59.285034   65407 system_pods.go:61] "kube-controller-manager-pause-670363" [a790a0b0-4114-48f1-82de-6a042d70fb3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0927 18:37:59.285041   65407 system_pods.go:61] "kube-proxy-hp2m9" [a8ff9fbb-0f43-4bf8-a3e3-315e1a325488] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0927 18:37:59.285045   65407 system_pods.go:61] "kube-scheduler-pause-670363" [948851c1-8c60-4c30-a079-871360fded9d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0927 18:37:59.285051   65407 system_pods.go:74] duration metric: took 9.3731ms to wait for pod list to return data ...
	I0927 18:37:59.285064   65407 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:37:59.288525   65407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:37:59.288559   65407 node_conditions.go:123] node cpu capacity is 2
	I0927 18:37:59.288574   65407 node_conditions.go:105] duration metric: took 3.504237ms to run NodePressure ...
	I0927 18:37:59.288599   65407 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0927 18:37:59.563984   65407 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0927 18:37:59.576010   65407 kubeadm.go:739] kubelet initialised
	I0927 18:37:59.576043   65407 kubeadm.go:740] duration metric: took 12.027246ms waiting for restarted kubelet to initialise ...
	I0927 18:37:59.576053   65407 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:37:59.583294   65407 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:37:56.781933   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:59.281521   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:37:58.683663   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .Start
	I0927 18:37:58.683975   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring networks are active...
	I0927 18:37:58.685082   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring network default is active
	I0927 18:37:58.685614   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Ensuring network mk-kubernetes-upgrade-477684 is active
	I0927 18:37:58.686066   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Getting domain xml...
	I0927 18:37:58.686903   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Creating domain...
	I0927 18:38:00.007379   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Waiting to get IP...
	I0927 18:38:00.008092   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.008535   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.008561   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:00.008498   65612 retry.go:31] will retry after 309.141514ms: waiting for machine to come up
	I0927 18:38:00.318883   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.319394   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.319418   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:00.319361   65612 retry.go:31] will retry after 380.291216ms: waiting for machine to come up
	I0927 18:38:00.701136   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.701634   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:00.701664   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:00.701597   65612 retry.go:31] will retry after 379.099705ms: waiting for machine to come up
	I0927 18:38:01.082072   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.082534   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.082562   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:01.082486   65612 retry.go:31] will retry after 521.177971ms: waiting for machine to come up
	I0927 18:38:01.605060   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.605650   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:01.605679   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:01.605599   65612 retry.go:31] will retry after 539.928688ms: waiting for machine to come up
	I0927 18:38:02.147277   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.147791   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.147820   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:02.147737   65612 retry.go:31] will retry after 681.554336ms: waiting for machine to come up
	I0927 18:38:02.830742   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.831215   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:02.831237   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:02.831197   65612 retry.go:31] will retry after 1.095543572s: waiting for machine to come up
	I0927 18:38:01.590623   65407 pod_ready.go:103] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:03.590922   65407 pod_ready.go:103] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:05.090625   65407 pod_ready.go:93] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:05.090666   65407 pod_ready.go:82] duration metric: took 5.507343687s for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:05.090680   65407 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:01.281665   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:03.281980   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:05.780476   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:03.928005   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:03.928482   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:03.928505   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:03.928442   65612 retry.go:31] will retry after 1.177666447s: waiting for machine to come up
	I0927 18:38:05.107636   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:05.108177   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:05.108198   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:05.108122   65612 retry.go:31] will retry after 1.381904362s: waiting for machine to come up
	I0927 18:38:06.491855   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:06.492292   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:06.492324   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:06.492250   65612 retry.go:31] will retry after 1.812949158s: waiting for machine to come up
	I0927 18:38:08.307099   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:08.307617   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:08.307642   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:08.307574   65612 retry.go:31] will retry after 2.684729478s: waiting for machine to come up
	I0927 18:38:07.780612   64485 pod_ready.go:103] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:09.281013   64485 pod_ready.go:93] pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.281043   64485 pod_ready.go:82] duration metric: took 38.007168667s for pod "coredns-7c65d6cfc9-pwf7q" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.281054   64485 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.282941   64485 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xvfmq" not found
	I0927 18:38:09.282964   64485 pod_ready.go:82] duration metric: took 1.903975ms for pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace to be "Ready" ...
	E0927 18:38:09.282972   64485 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xvfmq" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xvfmq" not found
	I0927 18:38:09.282979   64485 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.287595   64485 pod_ready.go:93] pod "etcd-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.287620   64485 pod_ready.go:82] duration metric: took 4.63436ms for pod "etcd-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.287633   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.293137   64485 pod_ready.go:93] pod "kube-apiserver-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.293178   64485 pod_ready.go:82] duration metric: took 5.536975ms for pod "kube-apiserver-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.293195   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.298363   64485 pod_ready.go:93] pod "kube-controller-manager-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.298390   64485 pod_ready.go:82] duration metric: took 5.184224ms for pod "kube-controller-manager-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.298402   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-vpdgz" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.478592   64485 pod_ready.go:93] pod "kube-proxy-vpdgz" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.478629   64485 pod_ready.go:82] duration metric: took 180.218179ms for pod "kube-proxy-vpdgz" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.478668   64485 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.878743   64485 pod_ready.go:93] pod "kube-scheduler-auto-268892" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:09.878773   64485 pod_ready.go:82] duration metric: took 400.089376ms for pod "kube-scheduler-auto-268892" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:09.878784   64485 pod_ready.go:39] duration metric: took 38.622865704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:09.878802   64485 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:38:09.878861   64485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:38:09.898126   64485 api_server.go:72] duration metric: took 39.439908312s to wait for apiserver process to appear ...
	I0927 18:38:09.898163   64485 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:38:09.898190   64485 api_server.go:253] Checking apiserver healthz at https://192.168.39.47:8443/healthz ...
	I0927 18:38:09.903799   64485 api_server.go:279] https://192.168.39.47:8443/healthz returned 200:
	ok
	I0927 18:38:09.905009   64485 api_server.go:141] control plane version: v1.31.1
	I0927 18:38:09.905035   64485 api_server.go:131] duration metric: took 6.864051ms to wait for apiserver health ...
	I0927 18:38:09.905045   64485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:38:10.081255   64485 system_pods.go:59] 7 kube-system pods found
	I0927 18:38:10.081287   64485 system_pods.go:61] "coredns-7c65d6cfc9-pwf7q" [a073af62-e848-4994-a501-b32b28e91435] Running
	I0927 18:38:10.081292   64485 system_pods.go:61] "etcd-auto-268892" [416c5787-5349-4451-8ad7-ee987ee333f7] Running
	I0927 18:38:10.081296   64485 system_pods.go:61] "kube-apiserver-auto-268892" [d51f6369-3be4-4029-91af-3c42b76bdd59] Running
	I0927 18:38:10.081299   64485 system_pods.go:61] "kube-controller-manager-auto-268892" [a40f8fa0-a7f4-44f3-8831-8042d8f0616b] Running
	I0927 18:38:10.081303   64485 system_pods.go:61] "kube-proxy-vpdgz" [30ed7e8e-ac3e-4b16-a5af-db83c746e06b] Running
	I0927 18:38:10.081306   64485 system_pods.go:61] "kube-scheduler-auto-268892" [4744c822-4aa2-4a4e-8d26-7d6cea12845d] Running
	I0927 18:38:10.081309   64485 system_pods.go:61] "storage-provisioner" [defb8de5-b283-4616-bcb3-0f7491746ecf] Running
	I0927 18:38:10.081315   64485 system_pods.go:74] duration metric: took 176.26413ms to wait for pod list to return data ...
	I0927 18:38:10.081321   64485 default_sa.go:34] waiting for default service account to be created ...
	I0927 18:38:10.278309   64485 default_sa.go:45] found service account: "default"
	I0927 18:38:10.278348   64485 default_sa.go:55] duration metric: took 197.021295ms for default service account to be created ...
	I0927 18:38:10.278358   64485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 18:38:10.479910   64485 system_pods.go:86] 7 kube-system pods found
	I0927 18:38:10.479940   64485 system_pods.go:89] "coredns-7c65d6cfc9-pwf7q" [a073af62-e848-4994-a501-b32b28e91435] Running
	I0927 18:38:10.479946   64485 system_pods.go:89] "etcd-auto-268892" [416c5787-5349-4451-8ad7-ee987ee333f7] Running
	I0927 18:38:10.479950   64485 system_pods.go:89] "kube-apiserver-auto-268892" [d51f6369-3be4-4029-91af-3c42b76bdd59] Running
	I0927 18:38:10.479954   64485 system_pods.go:89] "kube-controller-manager-auto-268892" [a40f8fa0-a7f4-44f3-8831-8042d8f0616b] Running
	I0927 18:38:10.479957   64485 system_pods.go:89] "kube-proxy-vpdgz" [30ed7e8e-ac3e-4b16-a5af-db83c746e06b] Running
	I0927 18:38:10.479960   64485 system_pods.go:89] "kube-scheduler-auto-268892" [4744c822-4aa2-4a4e-8d26-7d6cea12845d] Running
	I0927 18:38:10.479963   64485 system_pods.go:89] "storage-provisioner" [defb8de5-b283-4616-bcb3-0f7491746ecf] Running
	I0927 18:38:10.479969   64485 system_pods.go:126] duration metric: took 201.605948ms to wait for k8s-apps to be running ...
	I0927 18:38:10.479976   64485 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 18:38:10.480019   64485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:38:10.496795   64485 system_svc.go:56] duration metric: took 16.808087ms WaitForService to wait for kubelet
	I0927 18:38:10.496826   64485 kubeadm.go:582] duration metric: took 40.038612881s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:38:10.496846   64485 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:38:10.679220   64485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:38:10.679262   64485 node_conditions.go:123] node cpu capacity is 2
	I0927 18:38:10.679278   64485 node_conditions.go:105] duration metric: took 182.425523ms to run NodePressure ...
	I0927 18:38:10.679292   64485 start.go:241] waiting for startup goroutines ...
	I0927 18:38:10.679301   64485 start.go:246] waiting for cluster config update ...
	I0927 18:38:10.679314   64485 start.go:255] writing updated cluster config ...
	I0927 18:38:10.679605   64485 ssh_runner.go:195] Run: rm -f paused
	I0927 18:38:10.727421   64485 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 18:38:10.729528   64485 out.go:177] * Done! kubectl is now configured to use "auto-268892" cluster and "default" namespace by default
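	The pod_ready lines in this run all follow one pattern: poll a kube-system pod until its Ready condition reports True, logging "Ready":"False" on each miss. A minimal client-go sketch of that wait is shown below; the helper name, poll interval, and the kubeconfig path are chosen for illustration and are not minikube's actual implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the named pod until its Ready condition is True.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil // the status "Ready":"True" case in the log
					}
				}
			}
			time.Sleep(2 * time.Second) // roughly the cadence of the pod_ready log lines
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19712-11184/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-pwf7q", 15*time.Minute); err != nil {
			fmt.Println(err)
		}
	}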
	I0927 18:38:07.097152   65407 pod_ready.go:103] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:09.097603   65407 pod_ready.go:103] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:10.994197   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:10.994940   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:10.994971   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:10.994893   65612 retry.go:31] will retry after 2.852270096s: waiting for machine to come up
	I0927 18:38:11.101203   65407 pod_ready.go:103] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"False"
	I0927 18:38:13.097874   65407 pod_ready.go:93] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.097898   65407 pod_ready.go:82] duration metric: took 8.007210789s for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.097906   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.103034   65407 pod_ready.go:93] pod "kube-apiserver-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.103060   65407 pod_ready.go:82] duration metric: took 5.146887ms for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.103073   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.610406   65407 pod_ready.go:93] pod "kube-controller-manager-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.610429   65407 pod_ready.go:82] duration metric: took 507.3481ms for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.610450   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.615894   65407 pod_ready.go:93] pod "kube-proxy-hp2m9" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.615917   65407 pod_ready.go:82] duration metric: took 5.459583ms for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.615928   65407 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.621181   65407 pod_ready.go:93] pod "kube-scheduler-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:13.621206   65407 pod_ready.go:82] duration metric: took 5.271211ms for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:13.621215   65407 pod_ready.go:39] duration metric: took 14.045152047s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:13.621244   65407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 18:38:13.636756   65407 ops.go:34] apiserver oom_adj: -16
	I0927 18:38:13.636784   65407 kubeadm.go:597] duration metric: took 21.056813311s to restartPrimaryControlPlane
	I0927 18:38:13.636795   65407 kubeadm.go:394] duration metric: took 21.446501695s to StartCluster
	I0927 18:38:13.636816   65407 settings.go:142] acquiring lock: {Name:mkff6d039accbf3a6b700685f0be6da5d78436f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:13.636906   65407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 18:38:13.637957   65407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19712-11184/kubeconfig: {Name:mkab8a7b84da200c992e38e583a7f155711252bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 18:38:13.638169   65407 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0927 18:38:13.638303   65407 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0927 18:38:13.638521   65407 config.go:182] Loaded profile config "pause-670363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:38:13.639846   65407 out.go:177] * Verifying Kubernetes components...
	I0927 18:38:13.639846   65407 out.go:177] * Enabled addons: 
	I0927 18:38:13.641887   65407 addons.go:510] duration metric: took 3.593351ms for enable addons: enabled=[]
	I0927 18:38:13.641907   65407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0927 18:38:13.799774   65407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0927 18:38:13.818406   65407 node_ready.go:35] waiting up to 6m0s for node "pause-670363" to be "Ready" ...
	I0927 18:38:13.821992   65407 node_ready.go:49] node "pause-670363" has status "Ready":"True"
	I0927 18:38:13.822023   65407 node_ready.go:38] duration metric: took 3.584287ms for node "pause-670363" to be "Ready" ...
	I0927 18:38:13.822034   65407 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:13.897862   65407 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.295391   65407 pod_ready.go:93] pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:14.295416   65407 pod_ready.go:82] duration metric: took 397.530639ms for pod "coredns-7c65d6cfc9-skggj" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.295426   65407 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.694931   65407 pod_ready.go:93] pod "etcd-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:14.694962   65407 pod_ready.go:82] duration metric: took 399.52994ms for pod "etcd-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:14.694975   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.095910   65407 pod_ready.go:93] pod "kube-apiserver-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:15.095938   65407 pod_ready.go:82] duration metric: took 400.954032ms for pod "kube-apiserver-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.095951   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.495173   65407 pod_ready.go:93] pod "kube-controller-manager-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:15.495198   65407 pod_ready.go:82] duration metric: took 399.238887ms for pod "kube-controller-manager-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.495209   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.895665   65407 pod_ready.go:93] pod "kube-proxy-hp2m9" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:15.895704   65407 pod_ready.go:82] duration metric: took 400.486882ms for pod "kube-proxy-hp2m9" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:15.895720   65407 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:16.294938   65407 pod_ready.go:93] pod "kube-scheduler-pause-670363" in "kube-system" namespace has status "Ready":"True"
	I0927 18:38:16.294971   65407 pod_ready.go:82] duration metric: took 399.242542ms for pod "kube-scheduler-pause-670363" in "kube-system" namespace to be "Ready" ...
	I0927 18:38:16.294983   65407 pod_ready.go:39] duration metric: took 2.472936854s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 18:38:16.295001   65407 api_server.go:52] waiting for apiserver process to appear ...
	I0927 18:38:16.295051   65407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:38:16.310200   65407 api_server.go:72] duration metric: took 2.672005153s to wait for apiserver process to appear ...
	I0927 18:38:16.310230   65407 api_server.go:88] waiting for apiserver healthz status ...
	I0927 18:38:16.310250   65407 api_server.go:253] Checking apiserver healthz at https://192.168.61.48:8443/healthz ...
	I0927 18:38:16.315615   65407 api_server.go:279] https://192.168.61.48:8443/healthz returned 200:
	ok
	I0927 18:38:16.316400   65407 api_server.go:141] control plane version: v1.31.1
	I0927 18:38:16.316416   65407 api_server.go:131] duration metric: took 6.180781ms to wait for apiserver health ...
	I0927 18:38:16.316424   65407 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 18:38:16.497011   65407 system_pods.go:59] 6 kube-system pods found
	I0927 18:38:16.497038   65407 system_pods.go:61] "coredns-7c65d6cfc9-skggj" [791b00fc-3bda-4bf9-a341-8a369bbdcc5d] Running
	I0927 18:38:16.497043   65407 system_pods.go:61] "etcd-pause-670363" [242cc019-dd4b-42d0-84d8-2252d59f7ce0] Running
	I0927 18:38:16.497047   65407 system_pods.go:61] "kube-apiserver-pause-670363" [3efcaba6-afa5-4f82-834e-09922f7dee83] Running
	I0927 18:38:16.497051   65407 system_pods.go:61] "kube-controller-manager-pause-670363" [a790a0b0-4114-48f1-82de-6a042d70fb3a] Running
	I0927 18:38:16.497054   65407 system_pods.go:61] "kube-proxy-hp2m9" [a8ff9fbb-0f43-4bf8-a3e3-315e1a325488] Running
	I0927 18:38:16.497057   65407 system_pods.go:61] "kube-scheduler-pause-670363" [948851c1-8c60-4c30-a079-871360fded9d] Running
	I0927 18:38:16.497063   65407 system_pods.go:74] duration metric: took 180.633999ms to wait for pod list to return data ...
	I0927 18:38:16.497070   65407 default_sa.go:34] waiting for default service account to be created ...
	I0927 18:38:16.695791   65407 default_sa.go:45] found service account: "default"
	I0927 18:38:16.695814   65407 default_sa.go:55] duration metric: took 198.739486ms for default service account to be created ...
	I0927 18:38:16.695823   65407 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 18:38:16.897658   65407 system_pods.go:86] 6 kube-system pods found
	I0927 18:38:16.897686   65407 system_pods.go:89] "coredns-7c65d6cfc9-skggj" [791b00fc-3bda-4bf9-a341-8a369bbdcc5d] Running
	I0927 18:38:16.897692   65407 system_pods.go:89] "etcd-pause-670363" [242cc019-dd4b-42d0-84d8-2252d59f7ce0] Running
	I0927 18:38:16.897696   65407 system_pods.go:89] "kube-apiserver-pause-670363" [3efcaba6-afa5-4f82-834e-09922f7dee83] Running
	I0927 18:38:16.897700   65407 system_pods.go:89] "kube-controller-manager-pause-670363" [a790a0b0-4114-48f1-82de-6a042d70fb3a] Running
	I0927 18:38:16.897703   65407 system_pods.go:89] "kube-proxy-hp2m9" [a8ff9fbb-0f43-4bf8-a3e3-315e1a325488] Running
	I0927 18:38:16.897706   65407 system_pods.go:89] "kube-scheduler-pause-670363" [948851c1-8c60-4c30-a079-871360fded9d] Running
	I0927 18:38:16.897712   65407 system_pods.go:126] duration metric: took 201.884122ms to wait for k8s-apps to be running ...
	I0927 18:38:16.897718   65407 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 18:38:16.897766   65407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:38:16.911909   65407 system_svc.go:56] duration metric: took 14.182843ms WaitForService to wait for kubelet
	I0927 18:38:16.911937   65407 kubeadm.go:582] duration metric: took 3.273747111s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 18:38:16.911955   65407 node_conditions.go:102] verifying NodePressure condition ...
	I0927 18:38:17.096056   65407 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0927 18:38:17.096082   65407 node_conditions.go:123] node cpu capacity is 2
	I0927 18:38:17.096095   65407 node_conditions.go:105] duration metric: took 184.133998ms to run NodePressure ...
	I0927 18:38:17.096108   65407 start.go:241] waiting for startup goroutines ...
	I0927 18:38:17.096116   65407 start.go:246] waiting for cluster config update ...
	I0927 18:38:17.096126   65407 start.go:255] writing updated cluster config ...
	I0927 18:38:17.096430   65407 ssh_runner.go:195] Run: rm -f paused
	I0927 18:38:17.142318   65407 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 18:38:17.144415   65407 out.go:177] * Done! kubectl is now configured to use "pause-670363" cluster and "default" namespace by default
	I0927 18:38:13.848528   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:13.849052   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | unable to find current IP address of domain kubernetes-upgrade-477684 in network mk-kubernetes-upgrade-477684
	I0927 18:38:13.849088   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | I0927 18:38:13.849017   65612 retry.go:31] will retry after 3.744623631s: waiting for machine to come up
	I0927 18:38:17.596059   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.596665   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Found IP for machine: 192.168.50.36
	I0927 18:38:17.596711   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has current primary IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.596724   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Reserving static IP address...
	I0927 18:38:17.597184   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-477684", mac: "52:54:00:3f:58:c1", ip: "192.168.50.36"} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:17.597210   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | skip adding static IP to network mk-kubernetes-upgrade-477684 - found existing host DHCP lease matching {name: "kubernetes-upgrade-477684", mac: "52:54:00:3f:58:c1", ip: "192.168.50.36"}
	I0927 18:38:17.597223   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Reserved static IP address: 192.168.50.36
	I0927 18:38:17.597238   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Waiting for SSH to be available...
	I0927 18:38:17.597254   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Getting to WaitForSSH function...
	I0927 18:38:17.599993   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.600334   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:17.600363   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.600655   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using SSH client type: external
	I0927 18:38:17.600681   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | Using SSH private key: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa (-rw-------)
	I0927 18:38:17.600732   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.36 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0927 18:38:17.600745   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | About to run SSH command:
	I0927 18:38:17.600768   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | exit 0
	I0927 18:38:17.731543   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | SSH cmd err, output: <nil>: 
	I0927 18:38:17.731942   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetConfigRaw
	I0927 18:38:17.732555   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:38:17.734923   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.735216   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:17.735254   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.735503   65578 profile.go:143] Saving config to /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kubernetes-upgrade-477684/config.json ...
	I0927 18:38:17.735746   65578 machine.go:93] provisionDockerMachine start ...
	I0927 18:38:17.735768   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .DriverName
	I0927 18:38:17.736017   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:38:17.737988   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.738276   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:17.738303   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.738439   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:38:17.738632   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:17.738803   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:17.738919   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:38:17.739135   65578 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:17.739384   65578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:38:17.739400   65578 main.go:141] libmachine: About to run SSH command:
	hostname
	I0927 18:38:17.859877   65578 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0927 18:38:17.859908   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:38:17.860146   65578 buildroot.go:166] provisioning hostname "kubernetes-upgrade-477684"
	I0927 18:38:17.860170   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:38:17.860406   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:38:17.863251   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.863631   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:17.863674   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.863853   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:38:17.864036   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:17.864265   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:17.864478   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:38:17.864666   65578 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:17.864834   65578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:38:17.864846   65578 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-477684 && echo "kubernetes-upgrade-477684" | sudo tee /etc/hostname
	I0927 18:38:17.996390   65578 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-477684
	
	I0927 18:38:17.996434   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:38:17.999241   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.999666   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:17.999705   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:17.999865   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:38:18.000098   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:18.000298   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:18.000481   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:38:18.000681   65578 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:18.000845   65578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:38:18.000861   65578 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-477684' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-477684/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-477684' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0927 18:38:18.125546   65578 main.go:141] libmachine: SSH cmd err, output: <nil>: 
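For context, each provisioning step logged above (setting the hostname, patching /etc/hosts) is just one shell command run over SSH as the "docker" user on the VM, at the address, port and key path shown in the log. A minimal sketch of that call pattern, using golang.org/x/crypto/ssh directly rather than minikube's internal ssh_runner, and accepting any host key only because this is a throwaway test VM, could look like this:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path, user and address taken from the log above.
    	keyPath := "/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa"
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.50.36:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// The same command the provisioner logs before "SSH cmd err, output".
    	cmd := `sudo hostname kubernetes-upgrade-477684 && echo "kubernetes-upgrade-477684" | sudo tee /etc/hostname`
    	out, err := session.CombinedOutput(cmd)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("output: %s\n", out)
    }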
	I0927 18:38:18.125607   65578 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19712-11184/.minikube CaCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19712-11184/.minikube}
	I0927 18:38:18.125642   65578 buildroot.go:174] setting up certificates
	I0927 18:38:18.125655   65578 provision.go:84] configureAuth start
	I0927 18:38:18.125667   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetMachineName
	I0927 18:38:18.125938   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetIP
	I0927 18:38:18.128837   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.129219   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:18.129250   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.129382   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:38:18.132206   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.132656   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:18.132686   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.132814   65578 provision.go:143] copyHostCerts
	I0927 18:38:18.132877   65578 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem, removing ...
	I0927 18:38:18.132889   65578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem
	I0927 18:38:18.132955   65578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/key.pem (1671 bytes)
	I0927 18:38:18.133086   65578 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem, removing ...
	I0927 18:38:18.133098   65578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem
	I0927 18:38:18.133128   65578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/ca.pem (1082 bytes)
	I0927 18:38:18.133195   65578 exec_runner.go:144] found /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem, removing ...
	I0927 18:38:18.133203   65578 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem
	I0927 18:38:18.133224   65578 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19712-11184/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19712-11184/.minikube/cert.pem (1123 bytes)
	I0927 18:38:18.133284   65578 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-477684 san=[127.0.0.1 192.168.50.36 kubernetes-upgrade-477684 localhost minikube]
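The "generating server cert" line above lists the exact SANs and organization used for the machine's TLS server certificate. A minimal crypto/x509 sketch of what that amounts to is below; it creates a throwaway in-memory CA instead of loading the profile CA from .minikube/certs (ca.pem / ca-key.pem), and the 24-hour validity, 2048-bit keys and "server.pem" output name are illustrative choices, not minikube's:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA, standing in for .minikube/certs/ca.pem + ca-key.pem.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server certificate with the SANs listed in the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-477684"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"kubernetes-upgrade-477684", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.36")},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// The provisioner later copies the resulting PEM to /etc/docker/server.pem (see the scp lines below).
    	out, err := os.Create("server.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer out.Close()
    	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
    		log.Fatal(err)
    	}
    }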
	I0927 18:38:18.204198   65578 provision.go:177] copyRemoteCerts
	I0927 18:38:18.204263   65578 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0927 18:38:18.204299   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:38:18.207286   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.207645   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:18.207683   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.207947   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:38:18.208157   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:18.208352   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:38:18.208518   65578 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/kubernetes-upgrade-477684/id_rsa Username:docker}
	I0927 18:38:18.300290   65578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0927 18:38:18.331121   65578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0927 18:38:18.358896   65578 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19712-11184/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0927 18:38:18.384219   65578 provision.go:87] duration metric: took 258.552904ms to configureAuth
	I0927 18:38:18.384249   65578 buildroot.go:189] setting minikube options for container-runtime
	I0927 18:38:18.384481   65578 config.go:182] Loaded profile config "kubernetes-upgrade-477684": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:38:18.384565   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHHostname
	I0927 18:38:18.387693   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.388042   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:58:c1", ip: ""} in network mk-kubernetes-upgrade-477684: {Iface:virbr2 ExpiryTime:2024-09-27 19:38:10 +0000 UTC Type:0 Mac:52:54:00:3f:58:c1 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:kubernetes-upgrade-477684 Clientid:01:52:54:00:3f:58:c1}
	I0927 18:38:18.388089   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) DBG | domain kubernetes-upgrade-477684 has defined IP address 192.168.50.36 and MAC address 52:54:00:3f:58:c1 in network mk-kubernetes-upgrade-477684
	I0927 18:38:18.388235   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHPort
	I0927 18:38:18.388420   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:18.388565   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHKeyPath
	I0927 18:38:18.388735   65578 main.go:141] libmachine: (kubernetes-upgrade-477684) Calling .GetSSHUsername
	I0927 18:38:18.388913   65578 main.go:141] libmachine: Using SSH client type: native
	I0927 18:38:18.389121   65578 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I0927 18:38:18.389136   65578 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
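The sysconfig snippet written above passes "--insecure-registry 10.96.0.0/12" to CRI-O. That prefix matches kubeadm's default Kubernetes service CIDR, which suggests the intent is to let image pulls from an in-cluster registry Service (plain HTTP on a ClusterIP) through without TLS. A quick check that a ClusterIP falls inside that range, with a hypothetical registry IP since the real one comes from the cluster, could be:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	serviceCIDR := netip.MustParsePrefix("10.96.0.0/12")
    	// Hypothetical registry ClusterIP; the actual value would come from the registry Service.
    	registryIP := netip.MustParseAddr("10.98.123.45")
    	fmt.Println(serviceCIDR.Contains(registryIP)) // true: 10.96.0.0/12 spans 10.96.0.0-10.111.255.255
    }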
	==> CRI-O <==
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.797756245Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462299797717232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b6a0920-be43-486f-8a87-57a225aea5ac name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.798392104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=412a376c-b17d-480e-9b3a-9e6a35eb3324 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.798489008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=412a376c-b17d-480e-9b3a-9e6a35eb3324 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.798843776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=412a376c-b17d-480e-9b3a-9e6a35eb3324 name=/runtime.v1.RuntimeService/ListContainers
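The CRI-O debug entries in this excerpt are just the server side of standard CRI gRPC calls (Version, ImageFsInfo, ListContainers) arriving over the runtime socket. A minimal sketch of issuing the same /runtime.v1.RuntimeService/ListContainers request directly, assuming CRI-O's default socket path /var/run/crio/crio.sock and run on the node itself, is:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Empty filter corresponds to the logged "No filters were applied, returning full container list".
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s\t%s\tattempt=%d\n", c.Metadata.Name, c.State, c.Metadata.Attempt)
    	}
    }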
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.843508832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e3d3743-e669-4524-b19f-4c0a8e2a3452 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.843608268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e3d3743-e669-4524-b19f-4c0a8e2a3452 name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.844817421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93fd1803-708b-405e-8fa5-100e097d4df2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.845234205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462299845207424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93fd1803-708b-405e-8fa5-100e097d4df2 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.845993846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78d1a860-7c89-48f4-9219-a23e1d9c521d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.846060410Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78d1a860-7c89-48f4-9219-a23e1d9c521d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.846350112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78d1a860-7c89-48f4-9219-a23e1d9c521d name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.887814286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2534fd6a-b924-4d34-87f3-6f459b2bc21e name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.887910200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2534fd6a-b924-4d34-87f3-6f459b2bc21e name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.889179083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b05853a-16f0-4716-bbac-9aa2556b48f4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.889779787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462299889745592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b05853a-16f0-4716-bbac-9aa2556b48f4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.890518387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3ce5e54-3d29-47b0-b1a9-1321b5530214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.890612688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3ce5e54-3d29-47b0-b1a9-1321b5530214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.890957165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3ce5e54-3d29-47b0-b1a9-1321b5530214 name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.935439587Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64151ae5-3b5e-4ffe-8030-085d69e7ce3e name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.935520319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64151ae5-3b5e-4ffe-8030-085d69e7ce3e name=/runtime.v1.RuntimeService/Version
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.937045541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71cbb8e9-2ed7-4a0e-8328-bbcc6da0d23d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.937603522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462299937574163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71cbb8e9-2ed7-4a0e-8328-bbcc6da0d23d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.938340363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2ffaf4d-9a3c-4215-abec-a6b05c40cd2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.938415447Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2ffaf4d-9a3c-4215-abec-a6b05c40cd2a name=/runtime.v1.RuntimeService/ListContainers
	Sep 27 18:38:19 pause-670363 crio[2882]: time="2024-09-27 18:38:19.938675005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a,PodSandboxId:aa63a4aeb3e7c409bea0b485b5d34409bbbcfbb5a52115b1e0a9df222efea47b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1727462278486772541,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb,PodSandboxId:6e17bf7a2dcaae30fac1c1f5dc7062a231dda3ca107143634b21d8b5aca47e8f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1727462278458137345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120,PodSandboxId:3457adafd6cfa08938ad398246d60d4faf1bb438622ce46e381852074410f5ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1727462274620154848,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kub
ernetes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad,PodSandboxId:1206c72258a21027492edafe102f0eadc0be311f751c049eb788c919c2805d65,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1727462274623857549,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6,PodSandboxId:6b7506be02cca5808a465f52aaa643dd895854037dfb3ed3fd514e468ec22a31,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1727462274598822036,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3e3fbc777bd4a22a9a
38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603,PodSandboxId:7198c268b5c2e9006f1e80aeff502c56403d2032a1bc2a1a2fb5fde58e8d7688,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1727462274584975455,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io
.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb,PodSandboxId:04b6569b50341c984c0e4994f3aa070a6014e6205c4e20bb7b107a250ca9b797,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1727462270344356148,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-skggj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791b00fc-3bda-4bf9-a341-8a369bbdcc5d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a
204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1,PodSandboxId:bcd315f540bcadcbfb4f57af40171db514a8412f655a8ae472f93046b763ea65,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1727462269505123402,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.po
d.name: kube-proxy-hp2m9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ff9fbb-0f43-4bf8-a3e3-315e1a325488,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0,PodSandboxId:2b26b192fcfb0f437c57233da8deb6c734145f6ed2433de13302d2868de97482,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1727462269683948732,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dfe64d11fe072237410f484f82c3395,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9,PodSandboxId:f97a33d9f6d751df0232aac275613de6ff99a125fac2eb0dcd2f426cb36737c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1727462269602544456,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-670363,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: cb093c2322a6e3d05b8d8764dbfa3141,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d,PodSandboxId:45d03377a3031edf4f94c0b863b2adc63b798917a34741e5fc0edfe90787343d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1727462269540979325,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: b3e3fbc777bd4a22a9a38339d5fd10b7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596,PodSandboxId:439fef07f579add46fe7682fa6d69ccb6b0fd4487ef269075b5e43d011ecb8da,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1727462269479077557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-670363,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: d40994cf409da134ffe3d631f20b6f88,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2ffaf4d-9a3c-4215-abec-a6b05c40cd2a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b2c71d5f3f50d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   21 seconds ago      Running             coredns                   2                   aa63a4aeb3e7c       coredns-7c65d6cfc9-skggj
	158b56532d58b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   21 seconds ago      Running             kube-proxy                2                   6e17bf7a2dcaa       kube-proxy-hp2m9
	77efc66c2b7d6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   25 seconds ago      Running             kube-apiserver            2                   1206c72258a21       kube-apiserver-pause-670363
	a9ddc858a72cf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   25 seconds ago      Running             kube-controller-manager   2                   3457adafd6cfa       kube-controller-manager-pause-670363
	44c1762aae670       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   25 seconds ago      Running             kube-scheduler            2                   6b7506be02cca       kube-scheduler-pause-670363
	5f30f087b7b82       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago      Running             etcd                      2                   7198c268b5c2e       etcd-pause-670363
	953dd63cf444b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   29 seconds ago      Exited              coredns                   1                   04b6569b50341       coredns-7c65d6cfc9-skggj
	5a418e42dbad6       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   30 seconds ago      Exited              kube-apiserver            1                   2b26b192fcfb0       kube-apiserver-pause-670363
	0e3f13ae3ea85       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   30 seconds ago      Exited              etcd                      1                   f97a33d9f6d75       etcd-pause-670363
	dcd58c9354a7a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   30 seconds ago      Exited              kube-scheduler            1                   45d03377a3031       kube-scheduler-pause-670363
	cc44af5da4832       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   30 seconds ago      Exited              kube-proxy                1                   bcd315f540bca       kube-proxy-hp2m9
	2d0fb0b055331       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   30 seconds ago      Exited              kube-controller-manager   1                   439fef07f579a       kube-controller-manager-pause-670363
	
	
	==> coredns [953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb] <==
	
	
	==> coredns [b2c71d5f3f50d95ff03462bbdc9b290c78f29881a14478227f93a83087090c9a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:38946 - 28035 "HINFO IN 4201034557618249852.7739285813253681806. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012315836s
	
	
	==> describe nodes <==
	Name:               pause-670363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-670363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0bb8598e36db8db200944ab6842cd553a7bd60c
	                    minikube.k8s.io/name=pause-670363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T18_37_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 18:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-670363
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 18:38:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:36:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:36:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:36:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 18:37:57 +0000   Fri, 27 Sep 2024 18:37:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.48
	  Hostname:    pause-670363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc2f9bb4aff641819f1740156b9a7a17
	  System UUID:                fc2f9bb4-aff6-4181-9f17-40156b9a7a17
	  Boot ID:                    4e86b357-0ea3-4670-a8be-f8f6638c2026
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-skggj                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     75s
	  kube-system                 etcd-pause-670363                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         80s
	  kube-system                 kube-apiserver-pause-670363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-pause-670363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-hp2m9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-670363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 73s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     80s                kubelet          Node pause-670363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  80s                kubelet          Node pause-670363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s                kubelet          Node pause-670363 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeReady                79s                kubelet          Node pause-670363 status is now: NodeReady
	  Normal  RegisteredNode           76s                node-controller  Node pause-670363 event: Registered Node pause-670363 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-670363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-670363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-670363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-670363 event: Registered Node pause-670363 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.990979] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.060751] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053433] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.204507] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.122328] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.290366] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +4.235310] systemd-fstab-generator[739]: Ignoring "noauto" option for root device
	[  +3.976728] systemd-fstab-generator[870]: Ignoring "noauto" option for root device
	[  +0.059511] kauditd_printk_skb: 158 callbacks suppressed
	[Sep27 18:37] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.078979] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.309275] systemd-fstab-generator[1335]: Ignoring "noauto" option for root device
	[  +0.062282] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.031990] kauditd_printk_skb: 88 callbacks suppressed
	[ +31.087444] systemd-fstab-generator[2241]: Ignoring "noauto" option for root device
	[  +0.246235] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.339189] systemd-fstab-generator[2497]: Ignoring "noauto" option for root device
	[  +0.280809] systemd-fstab-generator[2628]: Ignoring "noauto" option for root device
	[  +0.551296] systemd-fstab-generator[2778]: Ignoring "noauto" option for root device
	[  +1.100075] systemd-fstab-generator[3115]: Ignoring "noauto" option for root device
	[  +2.388716] systemd-fstab-generator[3549]: Ignoring "noauto" option for root device
	[  +0.276088] kauditd_printk_skb: 266 callbacks suppressed
	[Sep27 18:38] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.507659] systemd-fstab-generator[3986]: Ignoring "noauto" option for root device
	
	
	==> etcd [0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9] <==
	{"level":"info","ts":"2024-09-27T18:37:50.250673Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-09-27T18:37:50.286377Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","commit-index":416}
	{"level":"info","ts":"2024-09-27T18:37:50.294161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 switched to configuration voters=()"}
	{"level":"info","ts":"2024-09-27T18:37:50.297381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became follower at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:50.297476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f76d6fbad492a1d6 [peers: [], term: 2, commit: 416, applied: 0, lastindex: 416, lastterm: 2]"}
	{"level":"warn","ts":"2024-09-27T18:37:50.317146Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-09-27T18:37:50.354574Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":400}
	{"level":"info","ts":"2024-09-27T18:37:50.403415Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-09-27T18:37:50.416618Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"f76d6fbad492a1d6","timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:37:50.417002Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"f76d6fbad492a1d6"}
	{"level":"info","ts":"2024-09-27T18:37:50.417045Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"f76d6fbad492a1d6","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-09-27T18:37:50.456575Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T18:37:50.456798Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:50.456841Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:50.456848Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:50.467956Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:37:50.488628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 switched to configuration voters=(17829029348050641366)"}
	{"level":"info","ts":"2024-09-27T18:37:50.488732Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","added-peer-id":"f76d6fbad492a1d6","added-peer-peer-urls":["https://192.168.61.48:2380"]}
	{"level":"info","ts":"2024-09-27T18:37:50.488857Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:50.488903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:50.527726Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-27T18:37:50.533505Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f76d6fbad492a1d6","initial-advertise-peer-urls":["https://192.168.61.48:2380"],"listen-peer-urls":["https://192.168.61.48:2380"],"advertise-client-urls":["https://192.168.61.48:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.48:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-27T18:37:50.535309Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:37:50.535419Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.48:2380"}
	{"level":"info","ts":"2024-09-27T18:37:50.536692Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.48:2380"}
	
	
	==> etcd [5f30f087b7b825953413fa1ce1960eef8a40591791d2a2e373de1e1015d6e603] <==
	{"level":"info","ts":"2024-09-27T18:37:54.913251Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:54.926447Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:54.926483Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-27T18:37:54.915592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 switched to configuration voters=(17829029348050641366)"}
	{"level":"info","ts":"2024-09-27T18:37:54.926677Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","added-peer-id":"f76d6fbad492a1d6","added-peer-peer-urls":["https://192.168.61.48:2380"]}
	{"level":"info","ts":"2024-09-27T18:37:54.926786Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f0fba60f4785994","local-member-id":"f76d6fbad492a1d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:54.927349Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T18:37:54.913110Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-09-27T18:37:54.923546Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-27T18:37:55.879994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:55.880207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:55.880365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 received MsgPreVoteResp from f76d6fbad492a1d6 at term 2"}
	{"level":"info","ts":"2024-09-27T18:37:55.880480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became candidate at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.880551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 received MsgVoteResp from f76d6fbad492a1d6 at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.880612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f76d6fbad492a1d6 became leader at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.880642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f76d6fbad492a1d6 elected leader f76d6fbad492a1d6 at term 3"}
	{"level":"info","ts":"2024-09-27T18:37:55.883469Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f76d6fbad492a1d6","local-member-attributes":"{Name:pause-670363 ClientURLs:[https://192.168.61.48:2379]}","request-path":"/0/members/f76d6fbad492a1d6/attributes","cluster-id":"6f0fba60f4785994","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T18:37:55.883664Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:37:55.884329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T18:37:55.885697Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:37:55.887167Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.48:2379"}
	{"level":"info","ts":"2024-09-27T18:37:55.888178Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T18:37:55.889698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T18:37:55.890309Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T18:37:55.890339Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:38:20 up 1 min,  0 users,  load average: 0.87, 0.34, 0.13
	Linux pause-670363 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0] <==
	I0927 18:37:50.466208       1 options.go:228] external host was not specified, using 192.168.61.48
	I0927 18:37:50.514324       1 server.go:142] Version: v1.31.1
	I0927 18:37:50.514411       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [77efc66c2b7d606c1665a2adcbf68ee44d3bac42654cd85943b6522dd8eecbad] <==
	I0927 18:37:57.536581       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0927 18:37:57.536689       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0927 18:37:57.537317       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0927 18:37:57.537804       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0927 18:37:57.544933       1 shared_informer.go:320] Caches are synced for configmaps
	I0927 18:37:57.546188       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0927 18:37:57.545350       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0927 18:37:57.560439       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0927 18:37:57.569404       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0927 18:37:57.577817       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0927 18:37:57.579130       1 aggregator.go:171] initial CRD sync complete...
	I0927 18:37:57.579202       1 autoregister_controller.go:144] Starting autoregister controller
	I0927 18:37:57.579228       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0927 18:37:57.579293       1 cache.go:39] Caches are synced for autoregister controller
	I0927 18:37:57.586579       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0927 18:37:57.586615       1 policy_source.go:224] refreshing policies
	I0927 18:37:57.655371       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0927 18:37:58.438695       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0927 18:37:59.415934       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0927 18:37:59.432382       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0927 18:37:59.475578       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0927 18:37:59.510623       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0927 18:37:59.526019       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0927 18:38:00.876868       1 controller.go:615] quota admission added evaluator for: endpoints
	I0927 18:38:01.169692       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596] <==
	
	
	==> kube-controller-manager [a9ddc858a72cf551b0cb59938ab15cb8762dbec57d5be7da7063add0e4941120] <==
	I0927 18:38:00.919457       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0927 18:38:00.919463       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0927 18:38:00.919543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-670363"
	I0927 18:38:00.922154       1 shared_informer.go:320] Caches are synced for daemon sets
	I0927 18:38:00.925078       1 shared_informer.go:320] Caches are synced for job
	I0927 18:38:00.925170       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0927 18:38:00.927814       1 shared_informer.go:320] Caches are synced for GC
	I0927 18:38:00.931516       1 shared_informer.go:320] Caches are synced for persistent volume
	I0927 18:38:00.981300       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 18:38:00.992571       1 shared_informer.go:320] Caches are synced for resource quota
	I0927 18:38:01.016327       1 shared_informer.go:320] Caches are synced for disruption
	I0927 18:38:01.017827       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0927 18:38:01.021457       1 shared_informer.go:320] Caches are synced for crt configmap
	I0927 18:38:01.030489       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0927 18:38:01.036338       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0927 18:38:01.036628       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0927 18:38:01.036675       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0927 18:38:01.036747       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0927 18:38:01.133218       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="265.721177ms"
	I0927 18:38:01.133514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.334µs"
	I0927 18:38:01.533606       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 18:38:01.567397       1 shared_informer.go:320] Caches are synced for garbage collector
	I0927 18:38:01.567544       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0927 18:38:04.733949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.775218ms"
	I0927 18:38:04.734281       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="95.42µs"
	
	
	==> kube-proxy [158b56532d58b7f435ad68d8d7b230c1b0d6d2e144b0696c2bea8448e2aed0fb] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0927 18:37:58.759439       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0927 18:37:58.776890       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.48"]
	E0927 18:37:58.777464       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 18:37:58.835439       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0927 18:37:58.835486       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0927 18:37:58.835515       1 server_linux.go:169] "Using iptables Proxier"
	I0927 18:37:58.840454       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 18:37:58.840751       1 server.go:483] "Version info" version="v1.31.1"
	I0927 18:37:58.840793       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:37:58.842588       1 config.go:199] "Starting service config controller"
	I0927 18:37:58.847806       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 18:37:58.842761       1 config.go:105] "Starting endpoint slice config controller"
	I0927 18:37:58.852903       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 18:37:58.852912       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0927 18:37:58.844222       1 config.go:328] "Starting node config controller"
	I0927 18:37:58.852942       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 18:37:58.852946       1 shared_informer.go:320] Caches are synced for node config
	I0927 18:37:58.948626       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1] <==
	
	
	==> kube-scheduler [44c1762aae670de440aa108d6c7011fc668375d64120f3edfc00ac3507ee12d6] <==
	I0927 18:37:56.213241       1 serving.go:386] Generated self-signed cert in-memory
	W0927 18:37:57.510862       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0927 18:37:57.511003       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0927 18:37:57.511034       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0927 18:37:57.511104       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0927 18:37:57.569137       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0927 18:37:57.570010       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 18:37:57.572242       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0927 18:37:57.572434       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0927 18:37:57.572536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0927 18:37:57.577894       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0927 18:37:57.678506       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d] <==
	
	
	==> kubelet <==
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.383686    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d40994cf409da134ffe3d631f20b6f88-kubeconfig\") pod \"kube-controller-manager-pause-670363\" (UID: \"d40994cf409da134ffe3d631f20b6f88\") " pod="kube-system/kube-controller-manager-pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.383701    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d40994cf409da134ffe3d631f20b6f88-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-670363\" (UID: \"d40994cf409da134ffe3d631f20b6f88\") " pod="kube-system/kube-controller-manager-pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.542897    3556 kubelet_node_status.go:72] "Attempting to register node" node="pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: E0927 18:37:54.543833    3556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.48:8443: connect: connection refused" node="pause-670363"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.571472    3556 scope.go:117] "RemoveContainer" containerID="dcd58c9354a7a56a967ae413f1c72b32cbc69469098fec7a0fac35c34073697d"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.573462    3556 scope.go:117] "RemoveContainer" containerID="0e3f13ae3ea85f10f416f43808b62dfd332d3f0d3c73c007b54500fe828109b9"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.575792    3556 scope.go:117] "RemoveContainer" containerID="2d0fb0b055331b3e36b948af5adb49f51f7bb0a07e7f60539f246ced96dce596"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.577418    3556 scope.go:117] "RemoveContainer" containerID="5a418e42dbad6a23bdd18ac26ec8b853fefbf83c5c31771c219ba7be861b1ba0"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: E0927 18:37:54.764798    3556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-670363?timeout=10s\": dial tcp 192.168.61.48:8443: connect: connection refused" interval="800ms"
	Sep 27 18:37:54 pause-670363 kubelet[3556]: I0927 18:37:54.945144    3556 kubelet_node_status.go:72] "Attempting to register node" node="pause-670363"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.648526    3556 kubelet_node_status.go:111] "Node was previously registered" node="pause-670363"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.648642    3556 kubelet_node_status.go:75] "Successfully registered node" node="pause-670363"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.648676    3556 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 27 18:37:57 pause-670363 kubelet[3556]: I0927 18:37:57.650097    3556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.135008    3556 apiserver.go:52] "Watching apiserver"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.160931    3556 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.260703    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8ff9fbb-0f43-4bf8-a3e3-315e1a325488-lib-modules\") pod \"kube-proxy-hp2m9\" (UID: \"a8ff9fbb-0f43-4bf8-a3e3-315e1a325488\") " pod="kube-system/kube-proxy-hp2m9"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.260803    3556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8ff9fbb-0f43-4bf8-a3e3-315e1a325488-xtables-lock\") pod \"kube-proxy-hp2m9\" (UID: \"a8ff9fbb-0f43-4bf8-a3e3-315e1a325488\") " pod="kube-system/kube-proxy-hp2m9"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.439977    3556 scope.go:117] "RemoveContainer" containerID="953dd63cf444b77941fb248a7bd1affa8d8a8b68aa7ba161487fc95bdabfd7eb"
	Sep 27 18:37:58 pause-670363 kubelet[3556]: I0927 18:37:58.441490    3556 scope.go:117] "RemoveContainer" containerID="cc44af5da4832ffc8e6a90645e7350c69675411728a82de9a41eec15ad4d6fc1"
	Sep 27 18:38:04 pause-670363 kubelet[3556]: E0927 18:38:04.243433    3556 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462284243051232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:38:04 pause-670363 kubelet[3556]: E0927 18:38:04.243730    3556 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462284243051232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:38:04 pause-670363 kubelet[3556]: I0927 18:38:04.694121    3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 27 18:38:14 pause-670363 kubelet[3556]: E0927 18:38:14.247570    3556 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462294247019072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 27 18:38:14 pause-670363 kubelet[3556]: E0927 18:38:14.247875    3556 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727462294247019072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-670363 -n pause-670363
helpers_test.go:261: (dbg) Run:  kubectl --context pause-670363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (40.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.053s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
E0927 18:51:40.069513   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 12 times in succession)
E0927 18:51:53.916778   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/flannel-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 18 times in succession)
E0927 18:52:12.479689   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/enable-default-cni-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 28 times in succession)
E0927 18:52:40.182938   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/enable-default-cni-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 7 times in succession)
E0927 18:52:46.570893   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/bridge-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 24 times in succession)
E0927 18:53:11.163349   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/auto-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 3 times in succession)
E0927 18:53:14.273320   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/bridge-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(the warning above was logged 83 times in succession)
E0927 18:54:36.666454   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/calico-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(last warning repeated 28 more times)
E0927 18:55:06.208251   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/custom-flannel-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(last warning repeated 10 more times)
E0927 18:55:16.997557   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(last warning repeated 30 more times)
E0927 18:55:48.476766   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/kindnet-268892/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.162:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.162:8443: connect: connection refused
(last warning repeated 3 more times)
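
The warnings above all come from the same place: the test helper repeatedly listing kubernetes-dashboard pods while nothing was answering on 192.168.72.162:8443. A minimal client-go sketch of an equivalent list call follows (an illustrative reconstruction, not the actual helpers_test.go code; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the profile's kubeconfig (path is a placeholder).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same query the helper issues: pods in the kubernetes-dashboard
		// namespace matching k8s-app=kubernetes-dashboard.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down this returns the "connection refused"
			// error seen in the warnings above.
			fmt.Println("WARNING:", err)
			return
		}
		fmt.Println("dashboard pods:", len(pods.Items))
	}
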
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (19m46s)
		TestNetworkPlugins/group (12m35s)
		TestStartStop (19m36s)
		TestStartStop/group/default-k8s-diff-port (12m35s)
		TestStartStop/group/default-k8s-diff-port/serial (12m35s)
		TestStartStop/group/default-k8s-diff-port/serial/SecondStart (8m46s)
		TestStartStop/group/embed-certs (13m12s)
		TestStartStop/group/embed-certs/serial (13m12s)
		TestStartStop/group/embed-certs/serial/SecondStart (9m8s)
		TestStartStop/group/no-preload (13m52s)
		TestStartStop/group/no-preload/serial (13m52s)
		TestStartStop/group/no-preload/serial/SecondStart (9m26s)
		TestStartStop/group/old-k8s-version (19m12s)
		TestStartStop/group/old-k8s-version/serial (19m12s)
		TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (4m12s)

                                                
                                                
goroutine 3943 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
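
The goroutine above is the harness's timeout alarm: go test arms a timer for the -timeout duration (2h0m0s in this run) and, when it fires, prints the still-running tests and panics, producing the dump that follows. A rough sketch of that mechanism with time.AfterFunc (an illustration of the idea, not the testing package source):

	package main

	import (
		"fmt"
		"time"
	)

	// runSuite stands in for the integration tests; it finishes quickly here.
	func runSuite() { time.Sleep(time.Second) }

	func main() {
		timeout := 2 * time.Hour // what `-timeout 2h0m0s` configures
		// Arm the alarm; if runSuite outlives it, the callback runs on its
		// own timer goroutine (created by time.goFunc) and panics, which
		// dumps every goroutine's stack.
		alarm := time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
		defer alarm.Stop()

		runSuite()
	}
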

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0007621a0, 0xc00083bbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc00076c0c0, {0x4590140, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x464c680?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00075cc80)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00075cc80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001c4900)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2771 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0012c9750, 0xc0014c0f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x5c?, 0xc0012c9750, 0xc0012c9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc00198be00?, 0xc001b03340?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012c97d0?, 0x593ba4?, 0xc001769c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2756
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a
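
Many of the parked goroutines below are client-go certificate-rotation watchers blocked in apimachinery's polling helpers, as in goroutine 2771 above. A small sketch of the wait.PollImmediateUntil pattern they sit in (plain usage of the public helper; nothing minikube-specific is assumed):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		stopCh := make(chan struct{})
		// Close the stop channel later, e.g. when a test context is cancelled.
		time.AfterFunc(5*time.Second, func() { close(stopCh) })

		// Runs the condition immediately, then once per interval, until it
		// returns true or an error, or until stopCh is closed.
		err := wait.PollImmediateUntil(time.Second, func() (bool, error) {
			fmt.Println("condition not met yet, keep waiting")
			return false, nil
		}, stopCh)
		fmt.Println("poll finished:", err)
	}
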

                                                
                                                
goroutine 2194 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000763520, 0x2f12e70)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1762
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 407 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 406
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1915 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0005474a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1666 +0x5e5
testing.tRunner(0xc000762680, 0xc0014a4348)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 1659
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1659 [chan receive, 20 minutes]:
testing.(*T).Run(0xc001e06340, {0x258dd97?, 0x55917c?}, 0xc0014a4348)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001e06340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001e06340, 0x2f12c30)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2357 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000951a40, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2944 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2940
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2143 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc000b5df50, 0xc000b5df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0xc0?, 0xc000b5df50, 0xc000b5df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc0013fd040?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00050bfd0?, 0x593ba4?, 0xc0000656c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 211 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7fbc10ba20b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00050e180?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00050e180)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc00050e180)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00075f780)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00075f780)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000231590, {0x3226c50, 0xc00075f780})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000231590)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc001e06000?, 0xc001e064e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 128
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129
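
Goroutine 211 is the helper HTTP proxy the functional tests started 79 minutes earlier and left serving in the background; it is simply blocked in Accept. The shape is plain net/http served from a spare goroutine; a minimal sketch (the real handler in functional_test.go is a proxy and is not reproduced here; the address is a placeholder):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		srv := &http.Server{
			Addr: "127.0.0.1:18080", // placeholder port
			Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				fmt.Fprintln(w, "ok")
			}),
		}
		// Serve in the background; this goroutine blocks in Accept, exactly
		// like goroutine 211, until the server is closed.
		go func() {
			if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
				fmt.Println("server stopped:", err)
			}
		}()

		time.Sleep(100 * time.Millisecond) // let the listener come up
		srv.Close()
	}
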

                                                
                                                
goroutine 2196 [chan receive, 20 minutes]:
testing.(*testContext).waitParallel(0xc0005474a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000763a00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000763a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000763a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000763a00, 0xc0018505c0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2194
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2792 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc001d05750, 0xc0017f4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0xa0?, 0xc001d05750, 0xc001d05798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc001e06820?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc001e34480?, 0xc000b162a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3231 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0004ecb60, {0x259982f?, 0x2554ee0?}, 0xc00162e280)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0004ecb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0004ecb60, 0xc001b18080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2197
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3398 [select, 10 minutes]:
os/exec.(*Cmd).watchCtx(0xc00165c480, 0xc00077e9a0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3395
	/usr/local/go/src/os/exec/exec.go:759 +0x953
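
Goroutine 3398 is the watcher that os/exec starts for any command created with a context: it waits for the process to exit or the context to be cancelled, whichever comes first. The suite runs minikube and kubectl this way; a minimal sketch of the pattern (the command and its arguments here are placeholders):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Bound the subprocess with a deadline; exec.CommandContext starts a
		// watchCtx goroutine (like goroutine 3398) that kills the process if
		// the context is cancelled first.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "kubectl", "get", "pods", "-A")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}
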

                                                
                                                
goroutine 850 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc00186fc20)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 815
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 2296 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0016e8820, {0x25b305f?, 0xc001d04570?}, 0xc0015e0100)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0016e8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0016e8820, 0xc001ad2100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2195
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3397 [IO wait]:
internal/poll.runtime_pollWait(0x7fbc10ba1fa8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001e11aa0?, 0xc002283622?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e11aa0, {0xc002283622, 0x1a9de, 0x1a9de})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0007207a8, {0xc002283622?, 0x5?, 0x20000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0014f4a80, {0x320d920, 0xc0008820e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320daa0, 0xc0014f4a80}, {0x320d920, 0xc0008820e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0007207a8?, {0x320daa0, 0xc0014f4a80})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0007207a8, {0x320daa0, 0xc0014f4a80})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320daa0, 0xc0014f4a80}, {0x320d9a0, 0xc0007207a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001a6c070?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3395
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 3461 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0017f5f50, 0xc0017f5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x0?, 0xc0017f5f50, 0xc0017f5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0x9e92b6?, 0xc0008a8300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012c9fd0?, 0x593ba4?, 0xc001b09900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3405
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2195 [chan receive, 19 minutes]:
testing.(*T).Run(0xc000763860, {0x258f0dc?, 0x0?}, 0xc001ad2100)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000763860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000763860, 0xc001850580)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2194
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3311 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3310
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2568 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc001d07f50, 0xc000b61f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x0?, 0xc001d07f50, 0xc001d07f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xa74b25?, 0xc001e11500?, 0x3229e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2614
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3462 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3461
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3111 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0018503d0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000ba2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001850400)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001458e30, {0x320efe0, 0xc0018c2210}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001458e30, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3140
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3404 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3403
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3028 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3027
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 617 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00020b680, 0xc001b02380)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 315
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2567 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001896ad0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0017f9d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001896b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00096df00, {0x320efe0, 0xc00141cea0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00096df00, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2614
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf
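
Goroutines 2567 and 3111 are the cert-rotation workers: wait.Until (via BackoffUntil/JitterUntil) re-runs processNextWorkItem roughly once per second until the stop channel closes. The looping primitive itself is small; a sketch of plain wait.Until usage (nothing here is taken from the client-go internals beyond the call shape visible in the stacks):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		stopCh := make(chan struct{})
		time.AfterFunc(3*time.Second, func() { close(stopCh) })

		// Run the worker, sleep for the period, repeat until stopCh closes.
		// BackoffUntil/JitterUntil in the stacks are the same loop with
		// backoff and jitter layered on top.
		wait.Until(func() {
			fmt.Println("processNextWorkItem tick")
		}, time.Second, stopCh)
	}
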

                                                
                                                
goroutine 3403 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3233318, 0xc0001177a0}, {0x32272b0, 0xc00096f640}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3233318?, 0xc00047e000?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3233318, 0xc00047e000}, 0xc0013fc820, {0xc00146c498, 0x16}, {0x25ae3cf, 0x14}, {0x25c17e0, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x3233318, 0xc00047e000}, 0xc0013fc820, {0xc00146c498, 0x16}, {0x25a1d81?, 0xc0012c6f60?}, {0x559033?, 0x4b162f?}, {0xc00145c000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x139
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0013fc820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0013fc820, 0xc0015e0100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2296
	/usr/local/go/src/testing/testing.go:1743 +0x390
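
Goroutine 3403 is the one doing the failing work: validateAppExistsAfterStop calls PodWait, which polls through wait.PollUntilContextTimeout (1s interval, immediate) until a healthy dashboard pod appears or the deadline passes. A compact sketch of that polling shape, written as a helper function without a main (illustrative only; PodWait's real readiness check is richer than this Running-phase test):

	package waitsketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForRunningPod polls until a pod matching selector in ns is Running,
	// mirroring the interval/immediate arguments visible in the stack above.
	func WaitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					// Treat transient apiserver errors ("connection refused"
					// while the node restarts) as not-ready-yet and keep polling.
					fmt.Println("WARNING: pod list returned:", err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}
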

                                                
                                                
goroutine 3396 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7fbc10ba22c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001e119e0?, 0xc0016902d4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e119e0, {0xc0016902d4, 0x52c, 0x52c})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000720768, {0xc0016902d4?, 0x4917c0?, 0x229?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0014f4a50, {0x320d920, 0xc00009cd40})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320daa0, 0xc0014f4a50}, {0x320d920, 0xc00009cd40}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000720768?, {0x320daa0, 0xc0014f4a50})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000720768, {0x320daa0, 0xc0014f4a50})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320daa0, 0xc0014f4a50}, {0x320d9a0, 0xc000720768}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001b18480?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3395
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 2945 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d1d9c0, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2940
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3349 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d1d480, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3305
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 406 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0012cb750, 0xc0014b1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x70?, 0xc0012cb750, 0xc0012cb798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc0004ecb60?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012cb7d0?, 0x593ba4?, 0xc00074fc70?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 426
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 817 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc00186fc20)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 815
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 3416 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7fbc10ba1b88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001c4b620?, 0xc001690aa1?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c4b620, {0xc001690aa1, 0x55f, 0x55f})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009cfb0, {0xc001690aa1?, 0x7fbc1036a9d8?, 0x230?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc000bdf560, {0x320d920, 0xc0008821f8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320daa0, 0xc000bdf560}, {0x320d920, 0xc0008821f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009cfb0?, {0x320daa0, 0xc000bdf560})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00009cfb0, {0x320daa0, 0xc000bdf560})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320daa0, 0xc000bdf560}, {0x320d9a0, 0xc00009cfb0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0015e0600?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3415
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 405 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008dcc10, 0x23)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000b5cd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008dcc40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005ffc60, {0x320efe0, 0xc00141d8c0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005ffc60, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 426
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 425 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 426 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008dcc40, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2614 [chan receive, 16 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001896b00, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2599
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 722 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e35980, 0xc00074fb90)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 673
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2569 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2568
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2770 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0018962d0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000ba3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001896300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018b8170, {0x320efe0, 0xc00145a3c0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018b8170, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2756
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 578 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a60600, 0xc001a6c3f0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 545
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3418 [select, 10 minutes]:
os/exec.(*Cmd).watchCtx(0xc00145c480, 0xc001847260)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3415
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 2772 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2771
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3101 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0018a05d0, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000837d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0018a0600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018ce040, {0x320efe0, 0xc001ac61b0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018ce040, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3267
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2429 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00014d900, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2427
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2198 [chan receive, 14 minutes]:
testing.(*T).Run(0xc000763d40, {0x258f0dc?, 0x0?}, 0xc00050e600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000763d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000763d40, 0xc001850640)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2194
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2973 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0004ec4e0, {0x259982f?, 0xc00009bd70?}, 0xc001b18480)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0004ec4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0004ec4e0, 0xc00050e600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2198
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 1762 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0017741a0, {0x258dd97?, 0x559033?}, 0x2f12e70)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0017741a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0017741a0, 0x2f12c78)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2197 [chan receive, 12 minutes]:
testing.(*T).Run(0xc000763ba0, {0x258f0dc?, 0x0?}, 0xc001b18080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000763ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000763ba0, 0xc001850600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2194
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3390 [syscall, 10 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x16, 0xc0014cdb30, 0x4, 0xc0016fccf0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc0016dc420?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001e34300)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc001e34300)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013fc680, 0xc001e34300)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3233318, 0xc0004ca150}, 0xc0013fc680, {0xc0001121a0, 0x1c}, {0x0?, 0xc001d04760?}, {0x559033?, 0x4b162f?}, {0xc0001d1600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0013fc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0013fc680, 0xc00162e280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3231
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2793 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2792
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3102 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0013c4f50, 0xc0013c4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x11?, 0xc0013c4f50, 0xc0013c4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc0004ecea0?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013c4fd0?, 0x593ba4?, 0xc001b18300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3267
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2755 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2754
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3267 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0018a0600, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3254
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2756 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001896300, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2754
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3139 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3087
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3112 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc000b9ef50, 0xc000b9ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x10?, 0xc000b9ef50, 0xc000b9ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc0013fc820?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012cc7d0?, 0x593ba4?, 0xc00176a1b0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3140
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3392 [IO wait]:
internal/poll.runtime_pollWait(0x7fbc10ba1c90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017f1200?, 0xc001688db5?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017f1200, {0xc001688db5, 0x324b, 0x324b})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000948190, {0xc001688db5?, 0x411b30?, 0x10000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00145b1a0, {0x320d920, 0xc000882228})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320daa0, 0xc00145b1a0}, {0x320d920, 0xc000882228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000948190?, {0x320daa0, 0xc00145b1a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000948190, {0x320daa0, 0xc00145b1a0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320daa0, 0xc00145b1a0}, {0x320d9a0, 0xc000948190}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001847490?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3390
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 2871 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2870
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2465 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00014d8d0, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001300d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00014d900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018b8ff0, {0x320efe0, 0xc001b95530}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018b8ff0, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2429
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2498 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0018a8f50, 0xc0018a8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0xf0?, 0xc0018a8f50, 0xc0018a8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc001e06340?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc0008a9200?, 0xc0014c43f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2429
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2200 [chan receive, 13 minutes]:
testing.(*T).Run(0xc0000fc340, {0x258f0dc?, 0x0?}, 0xc000728a80)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0000fc340, 0xc001850700)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2194
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2791 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000217dd0, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014b2d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000217e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b21ae0, {0x320efe0, 0xc001c66840}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b21ae0, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2872
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2613 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2599
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3348 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3305
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3393 [select, 10 minutes]:
os/exec.(*Cmd).watchCtx(0xc001e34300, 0xc001a6c850)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3390
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 3103 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3102
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2872 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000217e00, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2870
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2142 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000951a10, 0x13)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0017f6d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000951a40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b84080, {0x320efe0, 0xc0018c20c0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b84080, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3126 [chan receive, 10 minutes]:
testing.(*T).Run(0xc0013fc4e0, {0x259982f?, 0xc0012ca570?}, 0xc0015e0600)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0013fc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0013fc4e0, 0xc000728a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2200
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2144 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2143
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2428 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2427
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2499 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2498
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3460 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0008dce10, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000afd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008dce40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b20190, {0x320efe0, 0xc0014f40f0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b20190, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3405
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2356 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3309 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d1d450, 0x2)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc00083dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d1d480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00147f6d0, {0x320efe0, 0xc001cbb0b0}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00147f6d0, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3349
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3405 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008dce40, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3403
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3266 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3229e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3254
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3395 [syscall, 10 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x14, 0xc0014acb30, 0x4, 0xc0004c5680, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001b16498?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00165c480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc00165c480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0017749c0, 0xc00165c480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3233318, 0xc000163ab0}, 0xc0017749c0, {0xc0017e4720, 0x11}, {0x0?, 0xc001d05f60?}, {0x559033?, 0x4b162f?}, {0xc0017ec000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0017749c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0017749c0, 0xc001b18480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 2973
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3027 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0013c6f50, 0xc0013c6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x40?, 0xc0013c6f50, 0xc0013c6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0xc0016e8680?, 0x559940?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593b45?, 0xc001e35c80?, 0xc0014c4c40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2945
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3026 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc001d1d990, 0x12)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0014b0d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x324c920)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d1d9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b85420, {0x320efe0, 0xc001895560}, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b85420, 0x3b9aca00, 0x0, 0x1, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2945
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3310 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3233520, 0xc0000647e0}, 0xc0013d5750, 0xc0017f7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3233520, 0xc0000647e0}, 0x0?, 0xc0013d5750, 0xc0013d5798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3233520?, 0xc0000647e0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3349
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 3415 [syscall, 10 minutes]:
syscall.Syscall6(0xf7, 0x3, 0x15, 0xc0014afb30, 0x4, 0xc0013f8e10, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc0018b0060?)
	/usr/local/go/src/os/pidfd_linux.go:92 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00145c480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc00145c480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0004ecd00, 0xc00145c480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x3233318, 0xc000476d20}, 0xc0004ecd00, {0xc001afa360, 0x12}, {0x0?, 0xc0018a3f60?}, {0x559033?, 0x4b162f?}, {0xc001b1e000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0004ecd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0004ecd00, 0xc0015e0600)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 3126
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 3113 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3112
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3140 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001850400, 0xc0000647e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3087
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 3417 [IO wait]:
internal/poll.runtime_pollWait(0x7fbc10ba26e0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc001c4b6e0?, 0xc0012ed4c8?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001c4b6e0, {0xc0012ed4c8, 0x2b38, 0x2b38})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009d028, {0xc0012ed4c8?, 0x4?, 0xfe1f?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc000bdf590, {0x320d920, 0xc000948130})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320daa0, 0xc000bdf590}, {0x320d920, 0xc000948130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009d028?, {0x320daa0, 0xc000bdf590})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc00009d028, {0x320daa0, 0xc000bdf590})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320daa0, 0xc000bdf590}, {0x320d9a0, 0xc00009d028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001b18400?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3415
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                                
goroutine 3391 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7fbc101a8c08, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017f1140?, 0xc0016912c9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017f1140, {0xc0016912c9, 0x537, 0x537})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000948178, {0xc0016912c9?, 0x4917c0?, 0x213?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00145b170, {0x320d920, 0xc000720928})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x320daa0, 0xc00145b170}, {0x320d920, 0xc000720928}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000948178?, {0x320daa0, 0xc00145b170})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000948178, {0x320daa0, 0xc00145b170})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x320daa0, 0xc00145b170}, {0x320d9a0, 0xc000948178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00162e280?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3390
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

                                                
                                    

Test pass (163/207)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 27.24
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 18.77
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.75
18 TestDownloadOnly/v1.31.1/DeleteAll 0.16
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 50.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 52.56
29 TestCertExpiration 315.39
31 TestForceSystemdFlag 94.31
32 TestForceSystemdEnv 67.92
34 TestKVMDriverInstallOrUpdate 6.77
38 TestErrorSpam/setup 42.67
39 TestErrorSpam/start 0.36
40 TestErrorSpam/status 0.72
41 TestErrorSpam/pause 1.56
42 TestErrorSpam/unpause 1.7
43 TestErrorSpam/stop 5.53
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 53.22
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 44
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.86
55 TestFunctional/serial/CacheCmd/cache/add_local 2.11
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.75
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.11
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
63 TestFunctional/serial/ExtraConfig 48.76
64 TestFunctional/serial/ComponentHealth 0.07
65 TestFunctional/serial/LogsCmd 1.39
66 TestFunctional/serial/LogsFileCmd 1.42
67 TestFunctional/serial/InvalidService 4.34
69 TestFunctional/parallel/ConfigCmd 0.29
70 TestFunctional/parallel/DashboardCmd 39.25
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.17
73 TestFunctional/parallel/StatusCmd 0.79
77 TestFunctional/parallel/ServiceCmdConnect 11.48
78 TestFunctional/parallel/AddonsCmd 0.16
79 TestFunctional/parallel/PersistentVolumeClaim 51.79
81 TestFunctional/parallel/SSHCmd 0.41
82 TestFunctional/parallel/CpCmd 1.51
83 TestFunctional/parallel/MySQL 32.94
84 TestFunctional/parallel/FileSync 0.2
85 TestFunctional/parallel/CertSync 1.33
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
93 TestFunctional/parallel/License 0.6
94 TestFunctional/parallel/Version/short 0.05
95 TestFunctional/parallel/Version/components 0.72
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
100 TestFunctional/parallel/ImageCommands/ImageBuild 8.08
101 TestFunctional/parallel/ImageCommands/Setup 1.75
102 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
103 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
104 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
105 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
106 TestFunctional/parallel/ProfileCmd/profile_list 0.37
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.35
109 TestFunctional/parallel/ServiceCmd/DeployApp 11.22
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
125 TestFunctional/parallel/MountCmd/any-port 15.42
126 TestFunctional/parallel/ServiceCmd/List 0.33
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
129 TestFunctional/parallel/ServiceCmd/Format 0.42
130 TestFunctional/parallel/ServiceCmd/URL 0.4
131 TestFunctional/parallel/MountCmd/specific-port 1.96
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
133 TestFunctional/delete_echo-server_images 0.04
134 TestFunctional/delete_my-image_image 0.02
135 TestFunctional/delete_minikube_cached_images 0.02
139 TestMultiControlPlane/serial/StartCluster 199.48
140 TestMultiControlPlane/serial/DeployApp 7.7
141 TestMultiControlPlane/serial/PingHostFromPods 1.19
142 TestMultiControlPlane/serial/AddWorkerNode 57.11
143 TestMultiControlPlane/serial/NodeLabels 0.07
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
145 TestMultiControlPlane/serial/CopyFile 13.11
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.06
151 TestMultiControlPlane/serial/DeleteSecondaryNode 17.15
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
154 TestMultiControlPlane/serial/RestartCluster 457.35
155 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
156 TestMultiControlPlane/serial/AddSecondaryNode 77.6
157 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
161 TestJSONOutput/start/Command 52.49
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.68
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.6
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 7.35
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.19
189 TestMainNoArgs 0.04
190 TestMinikubeProfile 89.24
193 TestMountStart/serial/StartWithMountFirst 27.59
194 TestMountStart/serial/VerifyMountFirst 0.37
195 TestMountStart/serial/StartWithMountSecond 24.21
196 TestMountStart/serial/VerifyMountSecond 0.36
197 TestMountStart/serial/DeleteFirst 0.87
198 TestMountStart/serial/VerifyMountPostDelete 0.36
199 TestMountStart/serial/Stop 1.27
200 TestMountStart/serial/RestartStopped 23.93
201 TestMountStart/serial/VerifyMountPostStop 0.36
204 TestMultiNode/serial/FreshStart2Nodes 106.67
205 TestMultiNode/serial/DeployApp2Nodes 6.22
206 TestMultiNode/serial/PingHostFrom2Pods 0.76
207 TestMultiNode/serial/AddNode 47.88
208 TestMultiNode/serial/MultiNodeLabels 0.06
209 TestMultiNode/serial/ProfileList 0.56
210 TestMultiNode/serial/CopyFile 7.01
211 TestMultiNode/serial/StopNode 2.21
212 TestMultiNode/serial/StartAfterStop 39.29
214 TestMultiNode/serial/DeleteNode 2.33
216 TestMultiNode/serial/RestartMultiNode 186.63
217 TestMultiNode/serial/ValidateNameConflict 43.4
224 TestScheduledStopUnix 110.96
228 TestRunningBinaryUpgrade 203.61
233 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
234 TestNoKubernetes/serial/StartWithK8s 117.37
235 TestNoKubernetes/serial/StartWithStopK8s 39.2
236 TestNoKubernetes/serial/Start 52.34
237 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
238 TestNoKubernetes/serial/ProfileList 1.91
239 TestNoKubernetes/serial/Stop 1.31
240 TestNoKubernetes/serial/StartNoArgs 24.42
241 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
242 TestStoppedBinaryUpgrade/Setup 2.31
243 TestStoppedBinaryUpgrade/Upgrade 107.51
263 TestPause/serial/Start 85.4
265 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
x
+
TestDownloadOnly/v1.20.0/json-events (27.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-728881 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-728881 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.240075501s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (27.24s)
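
For reference, the download-only flow exercised here can be reproduced outside the test harness. A minimal sketch reusing the flags from the test invocation (the profile name is arbitrary, and the duplicated --container-runtime flag is dropped):

  # Pre-fetch the boot ISO and the Kubernetes preload tarball for a given
  # version without creating a VM; -o=json streams progress events to stdout.
  out/minikube-linux-amd64 start -o=json --download-only -p download-only-728881 \
    --force --alsologtostderr --kubernetes-version=v1.20.0 \
    --driver=kvm2 --container-runtime=crio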

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0927 16:56:19.059209   18368 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0927 16:56:19.059308   18368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-728881
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-728881: exit status 85 (56.341646ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-728881 | jenkins | v1.34.0 | 27 Sep 24 16:55 UTC |          |
	|         | -p download-only-728881        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 16:55:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 16:55:51.855467   18379 out.go:345] Setting OutFile to fd 1 ...
	I0927 16:55:51.855578   18379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:55:51.855588   18379 out.go:358] Setting ErrFile to fd 2...
	I0927 16:55:51.855592   18379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:55:51.855753   18379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	W0927 16:55:51.855856   18379 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19712-11184/.minikube/config/config.json: open /home/jenkins/minikube-integration/19712-11184/.minikube/config/config.json: no such file or directory
	I0927 16:55:51.856384   18379 out.go:352] Setting JSON to true
	I0927 16:55:51.857330   18379 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2297,"bootTime":1727453855,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 16:55:51.857415   18379 start.go:139] virtualization: kvm guest
	I0927 16:55:51.859773   18379 out.go:97] [download-only-728881] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 16:55:51.859900   18379 notify.go:220] Checking for updates...
	W0927 16:55:51.859905   18379 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 16:55:51.861725   18379 out.go:169] MINIKUBE_LOCATION=19712
	I0927 16:55:51.863184   18379 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 16:55:51.864688   18379 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 16:55:51.865839   18379 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 16:55:51.866969   18379 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 16:55:51.869364   18379 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 16:55:51.869616   18379 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 16:55:51.968812   18379 out.go:97] Using the kvm2 driver based on user configuration
	I0927 16:55:51.968842   18379 start.go:297] selected driver: kvm2
	I0927 16:55:51.968850   18379 start.go:901] validating driver "kvm2" against <nil>
	I0927 16:55:51.969303   18379 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:55:51.969460   18379 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 16:55:51.984899   18379 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 16:55:51.984940   18379 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 16:55:51.985466   18379 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0927 16:55:51.985613   18379 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 16:55:51.985647   18379 cni.go:84] Creating CNI manager for ""
	I0927 16:55:51.985690   18379 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 16:55:51.985700   18379 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 16:55:51.985742   18379 start.go:340] cluster config:
	{Name:download-only-728881 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-728881 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:55:51.985897   18379 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:55:51.987842   18379 out.go:97] Downloading VM boot image ...
	I0927 16:55:51.987882   18379 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I0927 16:56:05.818585   18379 out.go:97] Starting "download-only-728881" primary control-plane node in "download-only-728881" cluster
	I0927 16:56:05.818615   18379 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 16:56:05.918136   18379 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0927 16:56:05.918169   18379 cache.go:56] Caching tarball of preloaded images
	I0927 16:56:05.918354   18379 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0927 16:56:05.920135   18379 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0927 16:56:05.920159   18379 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0927 16:56:06.019312   18379 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-728881 host does not exist
	  To start a cluster, run: "minikube start -p download-only-728881"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
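
The non-zero exit above is expected: the profile was created with --download-only, so no host was ever booted and `minikube logs` has little to show beyond the audit trail. Re-running the same command by hand should reproduce it:

  out/minikube-linux-amd64 logs -p download-only-728881   # exited 85 in this run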

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-728881
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (18.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-184497 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-184497 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (18.768397325s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0927 16:56:38.148984   18368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0927 16:56:38.149027   18368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-184497
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-184497: exit status 85 (752.984107ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-728881 | jenkins | v1.34.0 | 27 Sep 24 16:55 UTC |                     |
	|         | -p download-only-728881        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:56 UTC |
	| delete  | -p download-only-728881        | download-only-728881 | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC | 27 Sep 24 16:56 UTC |
	| start   | -o=json --download-only        | download-only-184497 | jenkins | v1.34.0 | 27 Sep 24 16:56 UTC |                     |
	|         | -p download-only-184497        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 16:56:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 16:56:19.416945   18634 out.go:345] Setting OutFile to fd 1 ...
	I0927 16:56:19.417058   18634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:19.417068   18634 out.go:358] Setting ErrFile to fd 2...
	I0927 16:56:19.417072   18634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 16:56:19.417255   18634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 16:56:19.417833   18634 out.go:352] Setting JSON to true
	I0927 16:56:19.418585   18634 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2324,"bootTime":1727453855,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 16:56:19.418712   18634 start.go:139] virtualization: kvm guest
	I0927 16:56:19.420950   18634 out.go:97] [download-only-184497] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 16:56:19.421097   18634 notify.go:220] Checking for updates...
	I0927 16:56:19.422382   18634 out.go:169] MINIKUBE_LOCATION=19712
	I0927 16:56:19.423593   18634 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 16:56:19.424792   18634 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 16:56:19.426210   18634 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 16:56:19.427858   18634 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0927 16:56:19.430610   18634 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0927 16:56:19.430839   18634 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 16:56:19.463460   18634 out.go:97] Using the kvm2 driver based on user configuration
	I0927 16:56:19.463487   18634 start.go:297] selected driver: kvm2
	I0927 16:56:19.463492   18634 start.go:901] validating driver "kvm2" against <nil>
	I0927 16:56:19.463836   18634 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:56:19.463908   18634 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19712-11184/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0927 16:56:19.479419   18634 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0927 16:56:19.479499   18634 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 16:56:19.480072   18634 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0927 16:56:19.480237   18634 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0927 16:56:19.480264   18634 cni.go:84] Creating CNI manager for ""
	I0927 16:56:19.480313   18634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0927 16:56:19.480321   18634 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 16:56:19.480376   18634 start.go:340] cluster config:
	{Name:download-only-184497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-184497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 16:56:19.480463   18634 iso.go:125] acquiring lock: {Name:mkdd97d4af4b3791c7249f9e5fc51ee92321adcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0927 16:56:19.482206   18634 out.go:97] Starting "download-only-184497" primary control-plane node in "download-only-184497" cluster
	I0927 16:56:19.482231   18634 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 16:56:19.993453   18634 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0927 16:56:19.993531   18634 cache.go:56] Caching tarball of preloaded images
	I0927 16:56:19.993709   18634 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0927 16:56:19.995744   18634 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0927 16:56:19.995769   18634 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0927 16:56:20.093959   18634 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19712-11184/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-184497 host does not exist
	  To start a cluster, run: "minikube start -p download-only-184497"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-184497
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I0927 16:56:39.444629   18368 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-233905 --alsologtostderr --binary-mirror http://127.0.0.1:36145 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-233905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-233905
--- PASS: TestBinaryMirror (0.60s)
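
TestBinaryMirror points minikube at an alternate location for the Kubernetes binaries it would otherwise fetch from dl.k8s.io. A sketch of the client side, assuming a mirror serving the release binaries is already listening on the given address (the port here is simply what the test's local server happened to bind):

  out/minikube-linux-amd64 start --download-only -p binary-mirror-233905 \
    --alsologtostderr --binary-mirror http://127.0.0.1:36145 \
    --driver=kvm2 --container-runtime=crio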

                                                
                                    
x
+
TestOffline (50.7s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-610052 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-610052 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (49.56927934s)
helpers_test.go:175: Cleaning up "offline-crio-610052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-610052
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-610052: (1.131217943s)
--- PASS: TestOffline (50.70s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-511364
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-511364: exit status 85 (52.544089ms)

                                                
                                                
-- stdout --
	* Profile "addons-511364" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-511364"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-511364
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-511364: exit status 85 (49.974908ms)

                                                
                                                
-- stdout --
	* Profile "addons-511364" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-511364"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestCertOptions (52.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-301458 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-301458 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.976098007s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-301458 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-301458 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-301458 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-301458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-301458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-301458: (1.070274301s)
--- PASS: TestCertOptions (52.56s)
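
The cert-options run shows how extra SANs and a non-default API server port are pushed into the generated serving certificate; a condensed sketch of the two commands the test relies on:

  # Start with additional apiserver IPs/names and a custom port.
  out/minikube-linux-amd64 start -p cert-options-301458 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
  # Inspect the resulting certificate from inside the VM to confirm the SANs.
  out/minikube-linux-amd64 -p cert-options-301458 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"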

                                                
                                    
x
+
TestCertExpiration (315.39s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-784714 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-784714 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m31.013500113s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-784714 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-784714 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.20009988s)
helpers_test.go:175: Cleaning up "cert-expiration-784714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-784714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-784714: (1.175013093s)
--- PASS: TestCertExpiration (315.39s)
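
TestCertExpiration starts a profile whose certificates are issued with a three-minute lifetime and later restarts it with --cert-expiration=8760h. The two start invocations as run:

  out/minikube-linux-amd64 start -p cert-expiration-784714 --memory=2048 \
    --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  # Later, restart the same profile with a one-year expiration window.
  out/minikube-linux-amd64 start -p cert-expiration-784714 --memory=2048 \
    --cert-expiration=8760h --driver=kvm2 --container-runtime=crio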

                                                
                                    
x
+
TestForceSystemdFlag (94.31s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-477115 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-477115 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m33.072941825s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-477115 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-477115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-477115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-477115: (1.053868789s)
--- PASS: TestForceSystemdFlag (94.31s)
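
The force-systemd check amounts to starting with --force-systemd and then reading the generated CRI-O drop-in inside the guest (presumably to verify the cgroup driver it configures). The two commands:

  out/minikube-linux-amd64 start -p force-systemd-flag-477115 --memory=2048 \
    --force-systemd --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p force-systemd-flag-477115 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf"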

                                                
                                    
x
+
TestForceSystemdEnv (67.92s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-682090 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-682090 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.940185948s)
helpers_test.go:175: Cleaning up "force-systemd-env-682090" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-682090
--- PASS: TestForceSystemdEnv (67.92s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (6.77s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0927 18:36:14.380535   18368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 18:36:14.380684   18368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0927 18:36:14.431838   18368 install.go:62] docker-machine-driver-kvm2: exit status 1
W0927 18:36:14.432209   18368 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 18:36:14.432274   18368 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3900550876/001/docker-machine-driver-kvm2
I0927 18:36:14.644253   18368 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3900550876/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc00076ee20 gz:0xc00076ee28 tar:0xc00076edd0 tar.bz2:0xc00076ede0 tar.gz:0xc00076edf0 tar.xz:0xc00076ee00 tar.zst:0xc00076ee10 tbz2:0xc00076ede0 tgz:0xc00076edf0 txz:0xc00076ee00 tzst:0xc00076ee10 xz:0xc00076ee30 zip:0xc00076ee40 zst:0xc00076ee38] Getters:map[file:0xc0018cf500 http:0xc000717040 https:0xc000717090] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 18:36:14.644333   18368 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3900550876/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (6.77s)

                                                
                                    
x
+
TestErrorSpam/setup (42.67s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-728779 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-728779 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-728779 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-728779 --driver=kvm2  --container-runtime=crio: (42.668304239s)
--- PASS: TestErrorSpam/setup (42.67s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
x
+
TestErrorSpam/stop (5.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 stop: (2.338023053s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 stop: (1.283351916s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-728779 --log_dir /tmp/nospam-728779 stop: (1.908922205s)
--- PASS: TestErrorSpam/stop (5.53s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19712-11184/.minikube/files/etc/test/nested/copy/18368/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (53.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-990577 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-990577 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (53.220917895s)
--- PASS: TestFunctional/serial/StartWithProxy (53.22s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (44s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0927 17:38:27.613320   18368 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-990577 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-990577 --alsologtostderr -v=8: (43.994843345s)
functional_test.go:663: soft start took 43.995661575s for "functional-990577" cluster.
I0927 17:39:11.608522   18368 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (44.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-990577 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:3.1: (1.245878581s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:3.3: (1.401765436s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:latest: (1.214517053s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.86s)
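
`cache add` pulls an image into minikube's local cache and loads it into the running node, so the later cache tests have something to work with. The same sequence by hand (the pause images are just small, convenient tags):

  out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:3.1
  out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:3.3
  out/minikube-linux-amd64 -p functional-990577 cache add registry.k8s.io/pause:latest
  # List what is cached and confirm the images are visible inside the node.
  out/minikube-linux-amd64 cache list
  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl images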

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-990577 /tmp/TestFunctionalserialCacheCmdcacheadd_local626437249/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cache add minikube-local-cache-test:functional-990577
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 cache add minikube-local-cache-test:functional-990577: (1.784700109s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cache delete minikube-local-cache-test:functional-990577
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-990577
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.553828ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 cache reload: (1.054734533s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.75s)
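
The cache_reload case exercises recovery after an image disappears from the node: remove it with crictl, confirm it is gone (the expected exit 1 above), then push the cached copy back with `cache reload`. Condensed:

  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
  out/minikube-linux-amd64 -p functional-990577 cache reload
  out/minikube-linux-amd64 -p functional-990577 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again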

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 kubectl -- --context functional-990577 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-990577 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (48.76s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-990577 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-990577 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.756791364s)
functional_test.go:761: restart took 48.756925689s for "functional-990577" cluster.
I0927 17:40:08.822458   18368 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (48.76s)
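
--extra-config passes a component.key=value pair through to the named Kubernetes component at start time; here it restarts the existing profile with an extra admission plugin enabled on the apiserver and waits for all components to come back:

  out/minikube-linux-amd64 start -p functional-990577 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all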

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-990577 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 logs: (1.390559785s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

TestFunctional/serial/LogsFileCmd (1.42s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 logs --file /tmp/TestFunctionalserialLogsFileCmd554008918/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 logs --file /tmp/TestFunctionalserialLogsFileCmd554008918/001/logs.txt: (1.421586594s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (4.34s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-990577 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-990577
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-990577: exit status 115 (272.027743ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.66:31616 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-990577 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)

TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 config get cpus: exit status 14 (45.123174ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 config get cpus: exit status 14 (45.905953ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
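The sequence above relies on `minikube config get` exiting with code 14 ("specified key could not be found in config") once the key has been unset. A minimal Go sketch of that same check, assuming a `minikube` binary on PATH (the test itself calls out/minikube-linux-amd64 against the functional-990577 profile); this is not the actual functional_test.go code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `config get` on a key that was never set fails with exit code 14 and
	// "Error: specified key could not be found in config", as captured above.
	out, err := exec.Command("minikube", "-p", "functional-990577", "config", "get", "cpus").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit code %d: %s", exitErr.ExitCode(), out)
	} else if err == nil {
		fmt.Printf("cpus is set to: %s", out)
	} else {
		fmt.Println("could not run minikube:", err)
	}
}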

TestFunctional/parallel/DashboardCmd (39.25s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-990577 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-990577 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31882: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (39.25s)
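The dashboard test starts `minikube dashboard --url` as a background daemon and later tears it down; the "unable to kill pid ... process already finished" line is benign, since the helper tries to kill a process that has already exited on its own. A rough sketch of that start/stop pattern with os/exec, assuming `minikube` on PATH (the real logic lives in functional_test.go and helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the dashboard proxy in the background, as the test does.
	cmd := exec.Command("minikube", "dashboard", "--url", "--port", "36195",
		"-p", "functional-990577", "--alsologtostderr", "-v=1")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}

	time.Sleep(30 * time.Second) // stand-in for the work the test does meanwhile

	// Kill reports "os: process already finished" if the daemon has already
	// exited; the test logs that and carries on, and so do we.
	if err := cmd.Process.Kill(); err != nil {
		fmt.Printf("unable to kill pid %d: %v\n", cmd.Process.Pid, err)
	}
	_ = cmd.Wait() // reap the child either way
}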

TestFunctional/parallel/DryRun (0.27s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-990577 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-990577 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (135.651293ms)

-- stdout --
	* [functional-990577] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0927 17:40:28.329853   31509 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:40:28.330106   31509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:40:28.330114   31509 out.go:358] Setting ErrFile to fd 2...
	I0927 17:40:28.330119   31509 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:40:28.330276   31509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:40:28.330814   31509 out.go:352] Setting JSON to false
	I0927 17:40:28.331656   31509 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4973,"bootTime":1727453855,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:40:28.331745   31509 start.go:139] virtualization: kvm guest
	I0927 17:40:28.333978   31509 out.go:177] * [functional-990577] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 17:40:28.335906   31509 notify.go:220] Checking for updates...
	I0927 17:40:28.335940   31509 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:40:28.337473   31509 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:40:28.339305   31509 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:40:28.340545   31509 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:40:28.341793   31509 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:40:28.343003   31509 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:40:28.344512   31509 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:40:28.344890   31509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:40:28.344948   31509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:40:28.361683   31509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0927 17:40:28.362094   31509 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:40:28.362603   31509 main.go:141] libmachine: Using API Version  1
	I0927 17:40:28.362627   31509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:40:28.362951   31509 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:40:28.363143   31509 main.go:141] libmachine: (functional-990577) Calling .DriverName
	I0927 17:40:28.363407   31509 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:40:28.363817   31509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:40:28.363857   31509 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:40:28.378607   31509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
	I0927 17:40:28.379052   31509 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:40:28.379536   31509 main.go:141] libmachine: Using API Version  1
	I0927 17:40:28.379559   31509 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:40:28.379879   31509 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:40:28.380044   31509 main.go:141] libmachine: (functional-990577) Calling .DriverName
	I0927 17:40:28.415432   31509 out.go:177] * Using the kvm2 driver based on existing profile
	I0927 17:40:28.417097   31509 start.go:297] selected driver: kvm2
	I0927 17:40:28.417115   31509 start.go:901] validating driver "kvm2" against &{Name:functional-990577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-990577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:40:28.417214   31509 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:40:28.419464   31509 out.go:201] 
	W0927 17:40:28.421017   31509 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 17:40:28.422531   31509 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-990577 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
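Both dry-run invocations validate flags without touching the VM: 250MB is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, below the 1800MB usable minimum quoted in the stderr above), while the flag-free dry run exits 0. A table-style sketch of that assertion, again assuming a `minikube` binary on PATH rather than the test's out/minikube-linux-amd64:

package main

import (
	"fmt"
	"os/exec"
)

// exitCode runs minikube with the given args and returns its exit status.
func exitCode(args ...string) int {
	cmd := exec.Command("minikube", args...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1 // binary missing or not executable
	}
	return 0
}

func main() {
	cases := []struct {
		name string
		args []string
		want int
	}{
		{"undersized memory is rejected", []string{"start", "-p", "functional-990577", "--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio"}, 23},
		{"plain dry run succeeds", []string{"start", "-p", "functional-990577", "--dry-run", "--driver=kvm2", "--container-runtime=crio"}, 0},
	}
	for _, c := range cases {
		got := exitCode(c.args...)
		fmt.Printf("%s: got exit code %d, want %d\n", c.name, got, c.want)
	}
}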

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-990577 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-990577 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (166.069827ms)

-- stdout --
	* [functional-990577] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0927 17:40:18.096882   30890 out.go:345] Setting OutFile to fd 1 ...
	I0927 17:40:18.096994   30890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:40:18.097008   30890 out.go:358] Setting ErrFile to fd 2...
	I0927 17:40:18.097015   30890 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 17:40:18.097265   30890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 17:40:18.097882   30890 out.go:352] Setting JSON to false
	I0927 17:40:18.098876   30890 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4963,"bootTime":1727453855,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 17:40:18.098973   30890 start.go:139] virtualization: kvm guest
	I0927 17:40:18.100680   30890 out.go:177] * [functional-990577] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0927 17:40:18.102898   30890 out.go:177]   - MINIKUBE_LOCATION=19712
	I0927 17:40:18.102968   30890 notify.go:220] Checking for updates...
	I0927 17:40:18.106425   30890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 17:40:18.107534   30890 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	I0927 17:40:18.108866   30890 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	I0927 17:40:18.110472   30890 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 17:40:18.111841   30890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 17:40:18.113603   30890 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 17:40:18.114220   30890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:40:18.114315   30890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:40:18.135359   30890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I0927 17:40:18.135892   30890 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:40:18.136596   30890 main.go:141] libmachine: Using API Version  1
	I0927 17:40:18.136622   30890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:40:18.137107   30890 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:40:18.137412   30890 main.go:141] libmachine: (functional-990577) Calling .DriverName
	I0927 17:40:18.137693   30890 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 17:40:18.138007   30890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 17:40:18.138064   30890 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 17:40:18.158486   30890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0927 17:40:18.159029   30890 main.go:141] libmachine: () Calling .GetVersion
	I0927 17:40:18.159639   30890 main.go:141] libmachine: Using API Version  1
	I0927 17:40:18.159667   30890 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 17:40:18.160079   30890 main.go:141] libmachine: () Calling .GetMachineName
	I0927 17:40:18.160277   30890 main.go:141] libmachine: (functional-990577) Calling .DriverName
	I0927 17:40:18.201142   30890 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0927 17:40:18.202272   30890 start.go:297] selected driver: kvm2
	I0927 17:40:18.202291   30890 start.go:901] validating driver "kvm2" against &{Name:functional-990577 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-990577 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.66 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 17:40:18.202427   30890 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 17:40:18.204433   30890 out.go:201] 
	W0927 17:40:18.205726   30890 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 17:40:18.206774   30890 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.79s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

TestFunctional/parallel/ServiceCmdConnect (11.48s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-990577 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-990577 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-b8bwp" [e0e6acd7-0536-475a-99a6-1da28d9fe7eb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-b8bwp" [e0e6acd7-0536-475a-99a6-1da28d9fe7eb] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004403815s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.66:31306
functional_test.go:1675: http://192.168.39.66:31306: success! body:

Hostname: hello-node-connect-67bdd5bbb4-b8bwp

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.66:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.66:31306
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.48s)
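The flow above is: create a Deployment from registry.k8s.io/echoserver:1.8, expose it as a NodePort Service, resolve a reachable URL with `minikube service ... --url`, then issue an HTTP GET and inspect the echoed request. A sketch of that last step only; the hard-coded URL is simply the one this run happened to return, so substitute the output of `service --url` from your own run:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NodePort URL as printed by `minikube -p functional-990577 service hello-node-connect --url`
	// in the log above.
	url := "http://192.168.39.66:31306"

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("reading body failed:", err)
		return
	}
	// The echoserver reflects the request back, including the Hostname
	// (pod name) and the request headers shown in the test output above.
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}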

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (51.79s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cd47d83b-3768-427d-b51f-e9b6f6f912b1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00542481s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-990577 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-990577 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-990577 get pvc myclaim -o=json
I0927 17:40:24.477245   18368 retry.go:31] will retry after 2.046458739s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:5d98c36a-87f3-4a2b-a67f-8d16207d1199 ResourceVersion:694 Generation:0 CreationTimestamp:2024-09-27 17:40:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b5e0a0 VolumeMode:0xc001b5e0b0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-990577 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-990577 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4b7fa41f-db44-4125-8657-4582124f58f1] Pending
helpers_test.go:344: "sp-pod" [4b7fa41f-db44-4125-8657-4582124f58f1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4b7fa41f-db44-4125-8657-4582124f58f1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004139909s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-990577 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-990577 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-990577 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b5bb0b3c-2790-4ef5-a5a5-86b338ff45e8] Pending
helpers_test.go:344: "sp-pod" [b5bb0b3c-2790-4ef5-a5a5-86b338ff45e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b5bb0b3c-2790-4ef5-a5a5-86b338ff45e8] Running
2024/09/27 17:41:08 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.004070638s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-990577 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.79s)
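The retry.go line above shows the pattern used while the claim is still Pending: re-read the PVC until .status.phase reports Bound (the claim itself, per the last-applied-configuration annotation, is a ReadWriteOnce, 500Mi, Filesystem-mode request). A sketch of such a polling loop via kubectl's jsonpath output; the bare `kubectl` call and the fixed 2-second interval are assumptions here, the suite itself uses a randomized backoff:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's .status.phase until it reports "Bound"
// or the deadline passes, mirroring the retry seen in the log above.
func waitForPVCBound(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}").Output()
		phase := strings.TrimSpace(string(out))
		if err == nil && phase == "Bound" {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pvc %s/%s not Bound before timeout (last phase %q, err %v)", ns, name, phase, err)
		}
		fmt.Printf("pvc phase = %q, want \"Bound\"; retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForPVCBound("functional-990577", "default", "myclaim", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}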

TestFunctional/parallel/SSHCmd (0.41s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (1.51s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh -n functional-990577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cp functional-990577:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3389166460/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh -n functional-990577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh -n functional-990577 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.51s)

TestFunctional/parallel/MySQL (32.94s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-990577 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-jwh4x" [6a900f0d-2b82-4954-be3f-12f92a273b92] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-jwh4x" [6a900f0d-2b82-4954-be3f-12f92a273b92] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.003832049s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-990577 exec mysql-6cdb49bbb-jwh4x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-990577 exec mysql-6cdb49bbb-jwh4x -- mysql -ppassword -e "show databases;": exit status 1 (132.788738ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 17:41:01.298009   18368 retry.go:31] will retry after 883.305469ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-990577 exec mysql-6cdb49bbb-jwh4x -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-990577 exec mysql-6cdb49bbb-jwh4x -- mysql -ppassword -e "show databases;": exit status 1 (148.845852ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 17:41:02.331069   18368 retry.go:31] will retry after 1.459230727s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-990577 exec mysql-6cdb49bbb-jwh4x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.94s)
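The two ERROR 2002 failures are expected: the pod reports Running before mysqld has finished initializing, so the suite simply retries the exec with a growing delay until `show databases;` succeeds. A sketch of that retry, with a plain doubling backoff standing in for the test's jittered one and the pod name copied from this particular run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Pod name from the log above; look yours up with `kubectl get pods -l app=mysql`.
	pod := "mysql-6cdb49bbb-jwh4x"
	backoff := time.Second

	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-990577",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysqld is ready:\n%s", out)
			return
		}
		// ERROR 2002 means the server socket isn't up yet; wait and try again.
		fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysql never became reachable")
}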

TestFunctional/parallel/FileSync (0.2s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/18368/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /etc/test/nested/copy/18368/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.33s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/18368.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /etc/ssl/certs/18368.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/18368.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /usr/share/ca-certificates/18368.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/183682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /etc/ssl/certs/183682.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/183682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /usr/share/ca-certificates/183682.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-990577 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh "sudo systemctl is-active docker": exit status 1 (206.673639ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh "sudo systemctl is-active containerd": exit status 1 (213.579978ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
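With crio as the configured runtime, docker and containerd should be disabled inside the VM, so `systemctl is-active` prints "inactive" and exits non-zero (surfaced above as ssh status 3 and minikube exit status 1). A sketch of the same probe over `minikube ssh`, assuming `minikube` on PATH; only the configured runtime (crio) should come back active:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("minikube", "-p", "functional-990577",
			"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit)).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// For the non-active runtimes the command exits non-zero and prints
		// "inactive"; err is nil only for the unit that is actually running.
		fmt.Printf("%s: %s (err: %v)\n", unit, state, err)
	}
}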

TestFunctional/parallel/License (0.6s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.72s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-990577 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-990577
localhost/kicbase/echo-server:functional-990577
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-990577 image ls --format short --alsologtostderr:
I0927 17:40:47.494622   32672 out.go:345] Setting OutFile to fd 1 ...
I0927 17:40:47.494914   32672 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:47.494925   32672 out.go:358] Setting ErrFile to fd 2...
I0927 17:40:47.494932   32672 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:47.495117   32672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
I0927 17:40:47.495774   32672 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:47.495921   32672 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:47.496318   32672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:47.496378   32672 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:47.511642   32672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
I0927 17:40:47.512203   32672 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:47.512838   32672 main.go:141] libmachine: Using API Version  1
I0927 17:40:47.512866   32672 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:47.513183   32672 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:47.513426   32672 main.go:141] libmachine: (functional-990577) Calling .GetState
I0927 17:40:47.515364   32672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:47.515408   32672 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:47.531115   32672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
I0927 17:40:47.531689   32672 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:47.532173   32672 main.go:141] libmachine: Using API Version  1
I0927 17:40:47.532196   32672 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:47.532517   32672 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:47.532693   32672 main.go:141] libmachine: (functional-990577) Calling .DriverName
I0927 17:40:47.532866   32672 ssh_runner.go:195] Run: systemctl --version
I0927 17:40:47.532893   32672 main.go:141] libmachine: (functional-990577) Calling .GetSSHHostname
I0927 17:40:47.535684   32672 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:47.536154   32672 main.go:141] libmachine: (functional-990577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7c:19", ip: ""} in network mk-functional-990577: {Iface:virbr1 ExpiryTime:2024-09-27 18:37:48 +0000 UTC Type:0 Mac:52:54:00:d9:7c:19 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-990577 Clientid:01:52:54:00:d9:7c:19}
I0927 17:40:47.536180   32672 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined IP address 192.168.39.66 and MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:47.536394   32672 main.go:141] libmachine: (functional-990577) Calling .GetSSHPort
I0927 17:40:47.536619   32672 main.go:141] libmachine: (functional-990577) Calling .GetSSHKeyPath
I0927 17:40:47.536763   32672 main.go:141] libmachine: (functional-990577) Calling .GetSSHUsername
I0927 17:40:47.536883   32672 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/functional-990577/id_rsa Username:docker}
I0927 17:40:47.693617   32672 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 17:40:47.793314   32672 main.go:141] libmachine: Making call to close driver server
I0927 17:40:47.793329   32672 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:47.793742   32672 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:47.793762   32672 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:47.793771   32672 main.go:141] libmachine: Making call to close driver server
I0927 17:40:47.793780   32672 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:47.794104   32672 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:47.794129   32672 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:47.794134   32672 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-990577 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 9527c0f683c3b | 192MB  |
| localhost/kicbase/echo-server           | functional-990577  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-990577  | b3225f7855f8e | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-990577  | 410a0d49a5d74 | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-990577 image ls --format table --alsologtostderr:
I0927 17:40:56.481901   32872 out.go:345] Setting OutFile to fd 1 ...
I0927 17:40:56.482021   32872 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:56.482032   32872 out.go:358] Setting ErrFile to fd 2...
I0927 17:40:56.482039   32872 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:56.482309   32872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
I0927 17:40:56.483219   32872 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:56.483385   32872 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:56.483794   32872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:56.483856   32872 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:56.499202   32872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
I0927 17:40:56.499748   32872 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:56.500281   32872 main.go:141] libmachine: Using API Version  1
I0927 17:40:56.500303   32872 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:56.500686   32872 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:56.500882   32872 main.go:141] libmachine: (functional-990577) Calling .GetState
I0927 17:40:56.503096   32872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:56.503155   32872 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:56.518935   32872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
I0927 17:40:56.519371   32872 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:56.520047   32872 main.go:141] libmachine: Using API Version  1
I0927 17:40:56.520076   32872 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:56.520547   32872 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:56.520815   32872 main.go:141] libmachine: (functional-990577) Calling .DriverName
I0927 17:40:56.521074   32872 ssh_runner.go:195] Run: systemctl --version
I0927 17:40:56.521100   32872 main.go:141] libmachine: (functional-990577) Calling .GetSSHHostname
I0927 17:40:56.524756   32872 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:56.525130   32872 main.go:141] libmachine: (functional-990577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7c:19", ip: ""} in network mk-functional-990577: {Iface:virbr1 ExpiryTime:2024-09-27 18:37:48 +0000 UTC Type:0 Mac:52:54:00:d9:7c:19 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-990577 Clientid:01:52:54:00:d9:7c:19}
I0927 17:40:56.525157   32872 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined IP address 192.168.39.66 and MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:56.525427   32872 main.go:141] libmachine: (functional-990577) Calling .GetSSHPort
I0927 17:40:56.525635   32872 main.go:141] libmachine: (functional-990577) Calling .GetSSHKeyPath
I0927 17:40:56.525798   32872 main.go:141] libmachine: (functional-990577) Calling .GetSSHUsername
I0927 17:40:56.525986   32872 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/functional-990577/id_rsa Username:docker}
I0927 17:40:56.625273   32872 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 17:40:56.692740   32872 main.go:141] libmachine: Making call to close driver server
I0927 17:40:56.692763   32872 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:56.693042   32872 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:56.693063   32872 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:56.693079   32872 main.go:141] libmachine: Making call to close driver server
I0927 17:40:56.693087   32872 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:56.693087   32872 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
I0927 17:40:56.693318   32872 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:56.693335   32872 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
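Note: the table above comes straight from the command the test drives; it can be reproduced by hand against the profile used in this run (functional-990577) with:

    out/minikube-linux-amd64 -p functional-990577 image ls --format table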

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-990577 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"12968670680f4561ef681878239
1eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"da53a65699918dc3e55fda80dc09432300725cb982b5898eb767082509cf83a5","repoDigests":["docker.io/library/9ae11a46cbadfe93bd48c7aae75e7421ebee8d3200e98308df173bea8b101861-tmp@sha256:f13bfeef8899844775e18e24e6c4bccac292a769b9c6e522a8353909f0be5f2e"],"repoTags":[],"size":"1466018"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"115053965e86b2df4d78af78d7951b8644839d
20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"410a0d49a5d744dbae2739dc2187eebdf2f685a100d3ab840649c04257850574","repoDigests":["localhost/minikube-local-cache-test@sha256:f6c92afbc3749c08f8c450f9058a9f72a354fadacbf50e87f04de8540ab635ac"],"repoTags":["localhost/minikube-local-cache-test:functional-990577"],"size":"3330"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-990577"],"size":"4943877"},{"id":"b3225f7855f8e15b3b88b8129c0c9a0cbef7c910d8b8d2aae4afa862a8000e92","repoDigests":["localhost/my-image@sha256:1d48f8009158a0964c64f3ffbf3abfd0f23a0102c9909cf98
0c15360292c2a13"],"repoTags":["localhost/my-image:functional-990577"],"size":"1468600"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","do
cker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5
a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbc
c5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd","repoDigests":["docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b","docker.io/library/nginx@sha256:640ac1e1ca185051544c12ed0c32c3f0be5d35737482a323af1d3fa5f12574d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853881"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest
"],"size":"1462480"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-990577 image ls --format json --alsologtostderr:
I0927 17:40:56.207970   32847 out.go:345] Setting OutFile to fd 1 ...
I0927 17:40:56.208070   32847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:56.208078   32847 out.go:358] Setting ErrFile to fd 2...
I0927 17:40:56.208082   32847 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:56.208272   32847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
I0927 17:40:56.208853   32847 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:56.208954   32847 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:56.209322   32847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:56.209366   32847 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:56.225067   32847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
I0927 17:40:56.225577   32847 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:56.226162   32847 main.go:141] libmachine: Using API Version  1
I0927 17:40:56.226186   32847 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:56.226626   32847 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:56.226885   32847 main.go:141] libmachine: (functional-990577) Calling .GetState
I0927 17:40:56.228787   32847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:56.228840   32847 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:56.244371   32847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
I0927 17:40:56.244808   32847 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:56.245368   32847 main.go:141] libmachine: Using API Version  1
I0927 17:40:56.245396   32847 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:56.245782   32847 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:56.246028   32847 main.go:141] libmachine: (functional-990577) Calling .DriverName
I0927 17:40:56.246221   32847 ssh_runner.go:195] Run: systemctl --version
I0927 17:40:56.246243   32847 main.go:141] libmachine: (functional-990577) Calling .GetSSHHostname
I0927 17:40:56.250133   32847 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:56.250869   32847 main.go:141] libmachine: (functional-990577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7c:19", ip: ""} in network mk-functional-990577: {Iface:virbr1 ExpiryTime:2024-09-27 18:37:48 +0000 UTC Type:0 Mac:52:54:00:d9:7c:19 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-990577 Clientid:01:52:54:00:d9:7c:19}
I0927 17:40:56.250925   32847 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined IP address 192.168.39.66 and MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:56.251141   32847 main.go:141] libmachine: (functional-990577) Calling .GetSSHPort
I0927 17:40:56.251395   32847 main.go:141] libmachine: (functional-990577) Calling .GetSSHKeyPath
I0927 17:40:56.251585   32847 main.go:141] libmachine: (functional-990577) Calling .GetSSHUsername
I0927 17:40:56.251761   32847 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/functional-990577/id_rsa Username:docker}
I0927 17:40:56.360666   32847 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 17:40:56.431830   32847 main.go:141] libmachine: Making call to close driver server
I0927 17:40:56.431846   32847 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:56.432115   32847 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:56.432132   32847 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:56.432158   32847 main.go:141] libmachine: Making call to close driver server
I0927 17:40:56.432161   32847 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
I0927 17:40:56.432167   32847 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:56.432450   32847 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:56.432541   32847 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:56.432508   32847 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
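Note: the JSON variant returns a single array of image objects (id, repoDigests, repoTags, size), the machine-friendly counterpart of the table output. A small post-processing sketch, assuming jq is available on the host (jq is not used by the test itself):

    out/minikube-linux-amd64 -p functional-990577 image ls --format json \
      | jq -r '.[] | "\(.repoTags | join(",")) \(.size)"'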

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-990577 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 9527c0f683c3b2f0465019f9f5456f01a0fc0d4d274466831b9910a21d0302cd
repoDigests:
- docker.io/library/nginx@sha256:10b61fc3d8262c8bf44c89aef3d81202ce12b8cba12fff2e32ca5978a2d88c2b
- docker.io/library/nginx@sha256:640ac1e1ca185051544c12ed0c32c3f0be5d35737482a323af1d3fa5f12574d6
repoTags:
- docker.io/library/nginx:latest
size: "191853881"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-990577
size: "4943877"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 410a0d49a5d744dbae2739dc2187eebdf2f685a100d3ab840649c04257850574
repoDigests:
- localhost/minikube-local-cache-test@sha256:f6c92afbc3749c08f8c450f9058a9f72a354fadacbf50e87f04de8540ab635ac
repoTags:
- localhost/minikube-local-cache-test:functional-990577
size: "3330"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-990577 image ls --format yaml --alsologtostderr:
I0927 17:40:47.848979   32697 out.go:345] Setting OutFile to fd 1 ...
I0927 17:40:47.849095   32697 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:47.849104   32697 out.go:358] Setting ErrFile to fd 2...
I0927 17:40:47.849109   32697 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:47.849315   32697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
I0927 17:40:47.849935   32697 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:47.850042   32697 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:47.850399   32697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:47.850448   32697 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:47.866449   32697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46123
I0927 17:40:47.867107   32697 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:47.867820   32697 main.go:141] libmachine: Using API Version  1
I0927 17:40:47.867846   32697 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:47.868395   32697 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:47.868762   32697 main.go:141] libmachine: (functional-990577) Calling .GetState
I0927 17:40:47.871880   32697 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:47.871951   32697 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:47.889045   32697 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35777
I0927 17:40:47.889664   32697 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:47.890383   32697 main.go:141] libmachine: Using API Version  1
I0927 17:40:47.890416   32697 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:47.890912   32697 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:47.891186   32697 main.go:141] libmachine: (functional-990577) Calling .DriverName
I0927 17:40:47.891510   32697 ssh_runner.go:195] Run: systemctl --version
I0927 17:40:47.891543   32697 main.go:141] libmachine: (functional-990577) Calling .GetSSHHostname
I0927 17:40:47.895652   32697 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:47.896121   32697 main.go:141] libmachine: (functional-990577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7c:19", ip: ""} in network mk-functional-990577: {Iface:virbr1 ExpiryTime:2024-09-27 18:37:48 +0000 UTC Type:0 Mac:52:54:00:d9:7c:19 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-990577 Clientid:01:52:54:00:d9:7c:19}
I0927 17:40:47.896149   32697 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined IP address 192.168.39.66 and MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:47.896417   32697 main.go:141] libmachine: (functional-990577) Calling .GetSSHPort
I0927 17:40:47.896687   32697 main.go:141] libmachine: (functional-990577) Calling .GetSSHKeyPath
I0927 17:40:47.896871   32697 main.go:141] libmachine: (functional-990577) Calling .GetSSHUsername
I0927 17:40:47.896996   32697 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/functional-990577/id_rsa Username:docker}
I0927 17:40:48.028886   32697 ssh_runner.go:195] Run: sudo crictl images --output json
I0927 17:40:48.075239   32697 main.go:141] libmachine: Making call to close driver server
I0927 17:40:48.075255   32697 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:48.075588   32697 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:48.075626   32697 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:48.075642   32697 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
I0927 17:40:48.075662   32697 main.go:141] libmachine: Making call to close driver server
I0927 17:40:48.075671   32697 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:48.075903   32697 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:48.075918   32697 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:48.075927   32697 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (8.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh pgrep buildkitd: exit status 1 (210.443427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image build -t localhost/my-image:functional-990577 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 image build -t localhost/my-image:functional-990577 testdata/build --alsologtostderr: (7.594778795s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-990577 image build -t localhost/my-image:functional-990577 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> da53a656999
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-990577
--> b3225f7855f
Successfully tagged localhost/my-image:functional-990577
b3225f7855f8e15b3b88b8129c0c9a0cbef7c910d8b8d2aae4afa862a8000e92
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-990577 image build -t localhost/my-image:functional-990577 testdata/build --alsologtostderr:
I0927 17:40:48.348867   32754 out.go:345] Setting OutFile to fd 1 ...
I0927 17:40:48.349059   32754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:48.349072   32754 out.go:358] Setting ErrFile to fd 2...
I0927 17:40:48.349093   32754 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0927 17:40:48.349440   32754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
I0927 17:40:48.350322   32754 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:48.351003   32754 config.go:182] Loaded profile config "functional-990577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0927 17:40:48.351386   32754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:48.351470   32754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:48.368441   32754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32851
I0927 17:40:48.369025   32754 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:48.369809   32754 main.go:141] libmachine: Using API Version  1
I0927 17:40:48.369857   32754 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:48.370285   32754 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:48.370526   32754 main.go:141] libmachine: (functional-990577) Calling .GetState
I0927 17:40:48.372773   32754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0927 17:40:48.372827   32754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0927 17:40:48.388157   32754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
I0927 17:40:48.388639   32754 main.go:141] libmachine: () Calling .GetVersion
I0927 17:40:48.389187   32754 main.go:141] libmachine: Using API Version  1
I0927 17:40:48.389210   32754 main.go:141] libmachine: () Calling .SetConfigRaw
I0927 17:40:48.389598   32754 main.go:141] libmachine: () Calling .GetMachineName
I0927 17:40:48.389780   32754 main.go:141] libmachine: (functional-990577) Calling .DriverName
I0927 17:40:48.389983   32754 ssh_runner.go:195] Run: systemctl --version
I0927 17:40:48.390020   32754 main.go:141] libmachine: (functional-990577) Calling .GetSSHHostname
I0927 17:40:48.393098   32754 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:48.393527   32754 main.go:141] libmachine: (functional-990577) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:7c:19", ip: ""} in network mk-functional-990577: {Iface:virbr1 ExpiryTime:2024-09-27 18:37:48 +0000 UTC Type:0 Mac:52:54:00:d9:7c:19 Iaid: IPaddr:192.168.39.66 Prefix:24 Hostname:functional-990577 Clientid:01:52:54:00:d9:7c:19}
I0927 17:40:48.393554   32754 main.go:141] libmachine: (functional-990577) DBG | domain functional-990577 has defined IP address 192.168.39.66 and MAC address 52:54:00:d9:7c:19 in network mk-functional-990577
I0927 17:40:48.393736   32754 main.go:141] libmachine: (functional-990577) Calling .GetSSHPort
I0927 17:40:48.393892   32754 main.go:141] libmachine: (functional-990577) Calling .GetSSHKeyPath
I0927 17:40:48.394035   32754 main.go:141] libmachine: (functional-990577) Calling .GetSSHUsername
I0927 17:40:48.394143   32754 sshutil.go:53] new ssh client: &{IP:192.168.39.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/functional-990577/id_rsa Username:docker}
I0927 17:40:48.506937   32754 build_images.go:161] Building image from path: /tmp/build.194187243.tar
I0927 17:40:48.507030   32754 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0927 17:40:48.520210   32754 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.194187243.tar
I0927 17:40:48.526557   32754 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.194187243.tar: stat -c "%s %y" /var/lib/minikube/build/build.194187243.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.194187243.tar': No such file or directory
I0927 17:40:48.526593   32754 ssh_runner.go:362] scp /tmp/build.194187243.tar --> /var/lib/minikube/build/build.194187243.tar (3072 bytes)
I0927 17:40:48.555593   32754 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.194187243
I0927 17:40:48.567239   32754 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.194187243 -xf /var/lib/minikube/build/build.194187243.tar
I0927 17:40:48.578208   32754 crio.go:315] Building image: /var/lib/minikube/build/build.194187243
I0927 17:40:48.578272   32754 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-990577 /var/lib/minikube/build/build.194187243 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0927 17:40:55.851784   32754 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-990577 /var/lib/minikube/build/build.194187243 --cgroup-manager=cgroupfs: (7.273491856s)
I0927 17:40:55.851853   32754 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.194187243
I0927 17:40:55.865824   32754 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.194187243.tar
I0927 17:40:55.881839   32754 build_images.go:217] Built localhost/my-image:functional-990577 from /tmp/build.194187243.tar
I0927 17:40:55.881890   32754 build_images.go:133] succeeded building to: functional-990577
I0927 17:40:55.881895   32754 build_images.go:134] failed building to: 
I0927 17:40:55.881919   32754 main.go:141] libmachine: Making call to close driver server
I0927 17:40:55.881926   32754 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:55.882290   32754 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:55.882300   32754 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
I0927 17:40:55.882319   32754 main.go:141] libmachine: Making call to close connection to plugin binary
I0927 17:40:55.882329   32754 main.go:141] libmachine: Making call to close driver server
I0927 17:40:55.882337   32754 main.go:141] libmachine: (functional-990577) Calling .Close
I0927 17:40:55.882608   32754 main.go:141] libmachine: (functional-990577) DBG | Closing plugin on server side
I0927 17:40:55.882711   32754 main.go:141] libmachine: Successfully made call to close driver server
I0927 17:40:55.882757   32754 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (8.08s)
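Note: the STEP lines above imply a three-step build context (FROM gcr.io/k8s-minikube/busybox, a no-op RUN, ADD content.txt /). A hand-run equivalent, with the Dockerfile and payload inferred from the build log rather than copied from testdata/build, could look like:

    mkdir -p /tmp/build-sketch
    printf 'placeholder payload\n' > /tmp/build-sketch/content.txt     # stand-in; the real content.txt is not shown in the log
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
    out/minikube-linux-amd64 -p functional-990577 image build -t localhost/my-image:functional-990577 /tmp/build-sketch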

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.725008671s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-990577
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "314.545218ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.200241ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "285.782965ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "50.192713ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image load --daemon kicbase/echo-server:functional-990577 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-990577 image load --daemon kicbase/echo-server:functional-990577 --alsologtostderr: (3.063602413s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-990577 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-990577 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-r5snc" [f86fa9de-3985-4da0-98cf-120859e2a4c2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-r5snc" [f86fa9de-3985-4da0-98cf-120859e2a4c2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004454064s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.22s)
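Note: the deploy step is plain kubectl against the functional-990577 context. A condensed replay, using a deployment-availability wait as a stand-in for the pod polling the harness does, followed by the URL lookup the later ServiceCmd subtests perform:

    kubectl --context functional-990577 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-990577 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-990577 wait --for=condition=available deployment/hello-node --timeout=10m   # stand-in for the harness's app=hello-node pod check
    out/minikube-linux-amd64 -p functional-990577 service hello-node --url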

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image load --daemon kicbase/echo-server:functional-990577 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-990577
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image load --daemon kicbase/echo-server:functional-990577 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image save kicbase/echo-server:functional-990577 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image rm kicbase/echo-server:functional-990577 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)
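Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/reload round trip through a tarball. A minimal sketch, with /tmp substituted for the Jenkins workspace path used in the log:

    out/minikube-linux-amd64 -p functional-990577 image save kicbase/echo-server:functional-990577 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-990577 image rm kicbase/echo-server:functional-990577
    out/minikube-linux-amd64 -p functional-990577 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-990577 image ls | grep echo-server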

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-990577
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 image save --daemon kicbase/echo-server:functional-990577 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-990577
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (15.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdany-port2112852860/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727458826100648865" to /tmp/TestFunctionalparallelMountCmdany-port2112852860/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727458826100648865" to /tmp/TestFunctionalparallelMountCmdany-port2112852860/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727458826100648865" to /tmp/TestFunctionalparallelMountCmdany-port2112852860/001/test-1727458826100648865
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.562131ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 17:40:26.300499   18368 retry.go:31] will retry after 386.521429ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 27 17:40 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 27 17:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 27 17:40 test-1727458826100648865
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh cat /mount-9p/test-1727458826100648865
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-990577 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [01ad7ef3-1ca8-423f-aa16-ff93f1b32e71] Pending
helpers_test.go:344: "busybox-mount" [01ad7ef3-1ca8-423f-aa16-ff93f1b32e71] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [01ad7ef3-1ca8-423f-aa16-ff93f1b32e71] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [01ad7ef3-1ca8-423f-aa16-ff93f1b32e71] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.004697917s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-990577 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdany-port2112852860/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (15.42s)
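Note: the 9p mount flow above can be replayed by hand; the mount command stays in the foreground, so run the checks from a second shell. /tmp/mount-sketch stands in for the harness's temp directory:

    mkdir -p /tmp/mount-sketch
    out/minikube-linux-amd64 mount -p functional-990577 /tmp/mount-sketch:/mount-9p     # shell 1: keep running
    out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p"  # shell 2: verify the 9p mount
    out/minikube-linux-amd64 -p functional-990577 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-990577 ssh "sudo umount -f /mount-9p"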

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 service list -o json
functional_test.go:1494: Took "303.557605ms" to run "out/minikube-linux-amd64 -p functional-990577 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.66:32735
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.66:32735
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
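Note: the endpoint printed above (http://192.168.39.66:32735 in this run) is a NodePort on the VM. Probing it from the host is not part of the test, and assumes curl plus direct routability of the 192.168.39.0/24 libvirt network:

    URL=$(out/minikube-linux-amd64 -p functional-990577 service hello-node --url)
    curl -sf "$URL"    # reachability of the VM network from the host is an assumption, not something this run verified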

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdspecific-port3200194803/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.558198ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 17:40:41.727146   18368 retry.go:31] will retry after 711.878972ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdspecific-port3200194803/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh "sudo umount -f /mount-9p": exit status 1 (216.753908ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-990577 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdspecific-port3200194803/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3099797924/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3099797924/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3099797924/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T" /mount1: exit status 1 (269.777953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0927 17:40:43.748413   18368 retry.go:31] will retry after 373.66403ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-990577 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3099797924/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3099797924/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-990577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3099797924/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)
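The two MountCmd subtests above reduce to the following manual workflow; the host directory is illustrative, the port and mount point match this run, and the mount command stays in the foreground unless backgrounded:

    # Mount a host directory into the guest over 9p on a fixed port
    out/minikube-linux-amd64 mount -p functional-990577 /tmp/hostdir:/mount-9p --port 46464 &

    # Verify the mount from inside the VM and inspect its contents
    out/minikube-linux-amd64 -p functional-990577 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-990577 ssh -- ls -la /mount-9p

    # Tear down any lingering mount helpers for the profile
    out/minikube-linux-amd64 mount -p functional-990577 --kill=true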

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-990577
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-990577
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-990577
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-748477 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-748477 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.815408853s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.48s)
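The StartCluster step above corresponds to creating a multi-control-plane cluster with the --ha flag; a minimal sketch using the profile name and sizing from this run:

    # Create an HA (multi-control-plane) cluster on kvm2 with CRI-O
    out/minikube-linux-amd64 start -p ha-748477 --ha --wait=true --memory=2200 \
      --driver=kvm2 --container-runtime=crio

    # Confirm every node reports as Running and Ready
    out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr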

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-748477 -- rollout status deployment/busybox: (5.496940717s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-j7gsn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-p8fcc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-xmqtg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-j7gsn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-p8fcc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-xmqtg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-j7gsn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-p8fcc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-xmqtg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.70s)
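The DeployApp step applies the busybox DNS test manifest and checks in-cluster name resolution from each replica; roughly, assuming the manifest path from the test tree (kubectl exec against the deployment picks one of its pods, since the generated pod names differ per run):

    # Deploy the test workload and wait for the rollout to finish
    out/minikube-linux-amd64 kubectl -p ha-748477 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-748477 -- rollout status deployment/busybox

    # Spot-check cluster DNS from inside the workload
    out/minikube-linux-amd64 kubectl -p ha-748477 -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local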

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-j7gsn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-j7gsn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-p8fcc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-p8fcc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-xmqtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-748477 -- exec busybox-7dff88458-xmqtg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.19s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-748477 -v=7 --alsologtostderr
E0927 17:45:16.997550   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.003967   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.015373   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.036819   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.078503   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.159970   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.321548   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:17.643831   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:18.286166   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:19.567754   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:22.129481   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 17:45:27.251400   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-748477 -v=7 --alsologtostderr: (56.254252429s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.11s)
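Adding the worker above is a single node add invocation against the existing profile; the cert_rotation warnings interleaved in the log reference a client certificate under the earlier functional-990577 profile that no longer exists on disk, and appear to be unrelated to this cluster:

    # Join an additional worker node to the HA profile
    out/minikube-linux-amd64 node add -p ha-748477 -v=7 --alsologtostderr

    # Re-check status so the new node is included
    out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr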

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-748477 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0927 17:45:37.493345   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp testdata/cp-test.txt ha-748477:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477:/home/docker/cp-test.txt ha-748477-m02:/home/docker/cp-test_ha-748477_ha-748477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test_ha-748477_ha-748477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477:/home/docker/cp-test.txt ha-748477-m03:/home/docker/cp-test_ha-748477_ha-748477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test_ha-748477_ha-748477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477:/home/docker/cp-test.txt ha-748477-m04:/home/docker/cp-test_ha-748477_ha-748477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test_ha-748477_ha-748477-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp testdata/cp-test.txt ha-748477-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m02:/home/docker/cp-test.txt ha-748477:/home/docker/cp-test_ha-748477-m02_ha-748477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test_ha-748477-m02_ha-748477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m02:/home/docker/cp-test.txt ha-748477-m03:/home/docker/cp-test_ha-748477-m02_ha-748477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test_ha-748477-m02_ha-748477-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m02:/home/docker/cp-test.txt ha-748477-m04:/home/docker/cp-test_ha-748477-m02_ha-748477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test_ha-748477-m02_ha-748477-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp testdata/cp-test.txt ha-748477-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt ha-748477:/home/docker/cp-test_ha-748477-m03_ha-748477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test_ha-748477-m03_ha-748477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt ha-748477-m02:/home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test_ha-748477-m03_ha-748477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m03:/home/docker/cp-test.txt ha-748477-m04:/home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test_ha-748477-m03_ha-748477-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp testdata/cp-test.txt ha-748477-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1837801640/001/cp-test_ha-748477-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt ha-748477:/home/docker/cp-test_ha-748477-m04_ha-748477.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477 "sudo cat /home/docker/cp-test_ha-748477-m04_ha-748477.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt ha-748477-m02:/home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test_ha-748477-m04_ha-748477-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 cp ha-748477-m04:/home/docker/cp-test.txt ha-748477-m03:/home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m03 "sudo cat /home/docker/cp-test_ha-748477-m04_ha-748477-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.11s)
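The CopyFile matrix above copies a small fixture between the host and every node pair; the essential commands, with an illustrative host destination path:

    # Host -> node, node -> host, and node -> node copies
    out/minikube-linux-amd64 -p ha-748477 cp testdata/cp-test.txt ha-748477:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-748477 cp ha-748477:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    out/minikube-linux-amd64 -p ha-748477 cp ha-748477:/home/docker/cp-test.txt ha-748477-m02:/home/docker/cp-test.txt

    # Verify the contents on a specific node over SSH
    out/minikube-linux-amd64 -p ha-748477 ssh -n ha-748477-m02 "sudo cat /home/docker/cp-test.txt"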

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.063599199s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.06s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 node delete m03 -v=7 --alsologtostderr
E0927 17:55:16.997440   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-748477 node delete m03 -v=7 --alsologtostderr: (16.368043182s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.15s)
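Deleting a secondary control-plane node, as exercised above, is addressed by node name relative to the profile; a sketch:

    # Remove the m03 node and confirm the remaining topology
    out/minikube-linux-amd64 -p ha-748477 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
    kubectl get nodes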

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (457.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-748477 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0927 18:00:17.001130   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:01:40.064023   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-748477 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m36.606718695s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
E0927 18:05:16.997182   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (457.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (77.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-748477 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-748477 --control-plane -v=7 --alsologtostderr: (1m16.781868735s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.60s)
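Re-adding a control-plane member uses the same node add command with --control-plane; roughly:

    # Add a new control-plane node to the HA cluster and re-check status
    out/minikube-linux-amd64 node add -p ha-748477 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-748477 status -v=7 --alsologtostderr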

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (52.49s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-404614 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-404614 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (52.485125565s)
--- PASS: TestJSONOutput/start/Command (52.49s)
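The JSONOutput group asserts that lifecycle commands emit one structured JSON event per step when --output=json is set (the events follow a CloudEvents-style schema, as visible in the TestErrorJSONOutput stdout further below); for example:

    # Each progress step is printed as a single JSON line on stdout
    out/minikube-linux-amd64 start -p json-output-404614 --output=json --user=testUser \
      --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio

    # pause, unpause and stop accept the same flags, as the later subtests show
    out/minikube-linux-amd64 pause -p json-output-404614 --output=json --user=testUser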

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-404614 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-404614 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.35s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-404614 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-404614 --output=json --user=testUser: (7.346910826s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-946964 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-946964 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.082944ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ca59b841-e130-4e77-b907-bfce90b038b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-946964] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9429096-842a-4489-8c97-1993b8e1b927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19712"}}
	{"specversion":"1.0","id":"f70b13af-6e10-4736-b8c8-76aead5319cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b15c2d7-2f18-4711-ba13-df850b54e59b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig"}}
	{"specversion":"1.0","id":"cba75f62-6466-492e-914f-2dd347db06ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube"}}
	{"specversion":"1.0","id":"17db40d4-3445-48e6-9927-0caadeb4bbbf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"db2e40f3-31e5-43d1-ab1c-ea8fcd14f0e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fd3d792e-3060-4300-b85b-1996667295f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-946964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-946964
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (89.24s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-326316 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-326316 --driver=kvm2  --container-runtime=crio: (42.826199813s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-338309 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-338309 --driver=kvm2  --container-runtime=crio: (43.401954948s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-326316
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-338309
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-338309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-338309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-338309: (1.008491284s)
helpers_test.go:175: Cleaning up "first-326316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-326316
--- PASS: TestMinikubeProfile (89.24s)
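TestMinikubeProfile checks that the active profile can be switched between two clusters; manually this looks roughly like the following, with the profile names taken from this run:

    # Create two profiles, switch the active one, then inspect the profile list as JSON
    out/minikube-linux-amd64 start -p first-326316 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-338309 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-326316
    out/minikube-linux-amd64 profile list -ojson

    # Clean up both profiles
    out/minikube-linux-amd64 delete -p second-338309
    out/minikube-linux-amd64 delete -p first-326316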

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.59s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-338309 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-338309 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.593125449s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.59s)
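The MountStart tests boot a VM with a host mount configured at start time and without Kubernetes components; the flag set used above, with the values from this run:

    # Start a mount-only VM with explicit 9p mount options and no Kubernetes
    out/minikube-linux-amd64 start -p mount-start-1-338309 --memory=2048 --no-kubernetes \
      --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --driver=kvm2 --container-runtime=crio

    # Verify the 9p mount from inside the VM
    out/minikube-linux-amd64 -p mount-start-1-338309 ssh -- mount | grep 9p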

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-338309 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-338309 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-353260 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-353260 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.210963386s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.21s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353260 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353260 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-338309 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353260 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353260 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-353260
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-353260: (1.26845711s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.93s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-353260
E0927 18:10:16.999007   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-353260: (22.926844507s)
--- PASS: TestMountStart/serial/RestartStopped (23.93s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353260 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-353260 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922780 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922780 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m46.271392444s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.67s)
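A plain (non-HA) multi-node cluster is created with --nodes, as in the FreshStart2Nodes step above; a sketch with the sizing from this run:

    # One control-plane node plus one worker
    out/minikube-linux-amd64 start -p multinode-922780 --nodes=2 --wait=true --memory=2200 \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr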

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-922780 -- rollout status deployment/busybox: (4.775476862s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-b4wjc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-ck89r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-b4wjc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-ck89r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-b4wjc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-ck89r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.22s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-b4wjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-b4wjc -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-ck89r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-922780 -- exec busybox-7dff88458-ck89r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                    
TestMultiNode/serial/AddNode (47.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-922780 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-922780 -v 3 --alsologtostderr: (47.328686464s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.88s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-922780 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.56s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp testdata/cp-test.txt multinode-922780:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4096433933/001/cp-test_multinode-922780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780:/home/docker/cp-test.txt multinode-922780-m02:/home/docker/cp-test_multinode-922780_multinode-922780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test_multinode-922780_multinode-922780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780:/home/docker/cp-test.txt multinode-922780-m03:/home/docker/cp-test_multinode-922780_multinode-922780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m03 "sudo cat /home/docker/cp-test_multinode-922780_multinode-922780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp testdata/cp-test.txt multinode-922780-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4096433933/001/cp-test_multinode-922780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt multinode-922780:/home/docker/cp-test_multinode-922780-m02_multinode-922780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780 "sudo cat /home/docker/cp-test_multinode-922780-m02_multinode-922780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780-m02:/home/docker/cp-test.txt multinode-922780-m03:/home/docker/cp-test_multinode-922780-m02_multinode-922780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m03 "sudo cat /home/docker/cp-test_multinode-922780-m02_multinode-922780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp testdata/cp-test.txt multinode-922780-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4096433933/001/cp-test_multinode-922780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt multinode-922780:/home/docker/cp-test_multinode-922780-m03_multinode-922780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780 "sudo cat /home/docker/cp-test_multinode-922780-m03_multinode-922780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780-m03:/home/docker/cp-test.txt multinode-922780-m02:/home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test_multinode-922780-m03_multinode-922780-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.01s)
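
The copy matrix above boils down to three shapes of minikube cp (host to node, node to host, node to node), each verified with an ssh cat on the target. A condensed sketch, with the temporary output path below standing in for the randomly named test directory used in this run:

	# host -> node
	out/minikube-linux-amd64 -p multinode-922780 cp testdata/cp-test.txt multinode-922780:/home/docker/cp-test.txt
	# node -> host (illustrative local path)
	out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780:/home/docker/cp-test.txt /tmp/cp-test_multinode-922780.txt
	# node -> node
	out/minikube-linux-amd64 -p multinode-922780 cp multinode-922780:/home/docker/cp-test.txt multinode-922780-m02:/home/docker/cp-test_multinode-922780_multinode-922780-m02.txt
	# verify the file arrived on the target node
	out/minikube-linux-amd64 -p multinode-922780 ssh -n multinode-922780-m02 "sudo cat /home/docker/cp-test_multinode-922780_multinode-922780-m02.txt"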

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 node stop m03: (1.388551636s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922780 status: exit status 7 (408.007684ms)

                                                
                                                
-- stdout --
	multinode-922780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-922780-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-922780-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr: exit status 7 (411.368093ms)

                                                
                                                
-- stdout --
	multinode-922780
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-922780-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-922780-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0927 18:13:25.040919   50075 out.go:345] Setting OutFile to fd 1 ...
	I0927 18:13:25.041182   50075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:13:25.041193   50075 out.go:358] Setting ErrFile to fd 2...
	I0927 18:13:25.041197   50075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 18:13:25.041374   50075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19712-11184/.minikube/bin
	I0927 18:13:25.041536   50075 out.go:352] Setting JSON to false
	I0927 18:13:25.041562   50075 mustload.go:65] Loading cluster: multinode-922780
	I0927 18:13:25.041669   50075 notify.go:220] Checking for updates...
	I0927 18:13:25.041922   50075 config.go:182] Loaded profile config "multinode-922780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0927 18:13:25.041938   50075 status.go:174] checking status of multinode-922780 ...
	I0927 18:13:25.042359   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.042471   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.062664   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0927 18:13:25.063136   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.063756   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.063782   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.064164   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.064402   50075 main.go:141] libmachine: (multinode-922780) Calling .GetState
	I0927 18:13:25.066017   50075 status.go:364] multinode-922780 host status = "Running" (err=<nil>)
	I0927 18:13:25.066034   50075 host.go:66] Checking if "multinode-922780" exists ...
	I0927 18:13:25.066337   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.066369   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.081459   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0927 18:13:25.081904   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.082395   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.082420   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.082735   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.082925   50075 main.go:141] libmachine: (multinode-922780) Calling .GetIP
	I0927 18:13:25.085803   50075 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:13:25.086247   50075 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:13:25.086272   50075 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:13:25.086421   50075 host.go:66] Checking if "multinode-922780" exists ...
	I0927 18:13:25.086744   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.086829   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.101556   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46369
	I0927 18:13:25.102025   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.102475   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.102497   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.102814   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.103013   50075 main.go:141] libmachine: (multinode-922780) Calling .DriverName
	I0927 18:13:25.103177   50075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 18:13:25.103204   50075 main.go:141] libmachine: (multinode-922780) Calling .GetSSHHostname
	I0927 18:13:25.106115   50075 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:13:25.106528   50075 main.go:141] libmachine: (multinode-922780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a6:70", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:10:48 +0000 UTC Type:0 Mac:52:54:00:fc:a6:70 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:multinode-922780 Clientid:01:52:54:00:fc:a6:70}
	I0927 18:13:25.106569   50075 main.go:141] libmachine: (multinode-922780) DBG | domain multinode-922780 has defined IP address 192.168.39.6 and MAC address 52:54:00:fc:a6:70 in network mk-multinode-922780
	I0927 18:13:25.106725   50075 main.go:141] libmachine: (multinode-922780) Calling .GetSSHPort
	I0927 18:13:25.106882   50075 main.go:141] libmachine: (multinode-922780) Calling .GetSSHKeyPath
	I0927 18:13:25.107062   50075 main.go:141] libmachine: (multinode-922780) Calling .GetSSHUsername
	I0927 18:13:25.107190   50075 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780/id_rsa Username:docker}
	I0927 18:13:25.181667   50075 ssh_runner.go:195] Run: systemctl --version
	I0927 18:13:25.187269   50075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:13:25.202525   50075 kubeconfig.go:125] found "multinode-922780" server: "https://192.168.39.6:8443"
	I0927 18:13:25.202569   50075 api_server.go:166] Checking apiserver status ...
	I0927 18:13:25.202619   50075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 18:13:25.216107   50075 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup
	W0927 18:13:25.225547   50075 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0927 18:13:25.225599   50075 ssh_runner.go:195] Run: ls
	I0927 18:13:25.229834   50075 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I0927 18:13:25.233860   50075 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I0927 18:13:25.233884   50075 status.go:456] multinode-922780 apiserver status = Running (err=<nil>)
	I0927 18:13:25.233895   50075 status.go:176] multinode-922780 status: &{Name:multinode-922780 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:13:25.233915   50075 status.go:174] checking status of multinode-922780-m02 ...
	I0927 18:13:25.234289   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.234330   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.249649   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32821
	I0927 18:13:25.250029   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.250438   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.250455   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.250836   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.250999   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .GetState
	I0927 18:13:25.252563   50075 status.go:364] multinode-922780-m02 host status = "Running" (err=<nil>)
	I0927 18:13:25.252580   50075 host.go:66] Checking if "multinode-922780-m02" exists ...
	I0927 18:13:25.252908   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.252942   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.268107   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46883
	I0927 18:13:25.268562   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.269108   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.269124   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.269492   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.269711   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .GetIP
	I0927 18:13:25.272828   50075 main.go:141] libmachine: (multinode-922780-m02) DBG | domain multinode-922780-m02 has defined MAC address 52:54:00:79:fa:c1 in network mk-multinode-922780
	I0927 18:13:25.273202   50075 main.go:141] libmachine: (multinode-922780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:fa:c1", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:11:47 +0000 UTC Type:0 Mac:52:54:00:79:fa:c1 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-922780-m02 Clientid:01:52:54:00:79:fa:c1}
	I0927 18:13:25.273228   50075 main.go:141] libmachine: (multinode-922780-m02) DBG | domain multinode-922780-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:79:fa:c1 in network mk-multinode-922780
	I0927 18:13:25.273353   50075 host.go:66] Checking if "multinode-922780-m02" exists ...
	I0927 18:13:25.273741   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.273787   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.289066   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35967
	I0927 18:13:25.289576   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.290089   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.290109   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.290428   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.290606   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .DriverName
	I0927 18:13:25.290795   50075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0927 18:13:25.290820   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .GetSSHHostname
	I0927 18:13:25.293656   50075 main.go:141] libmachine: (multinode-922780-m02) DBG | domain multinode-922780-m02 has defined MAC address 52:54:00:79:fa:c1 in network mk-multinode-922780
	I0927 18:13:25.294117   50075 main.go:141] libmachine: (multinode-922780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:fa:c1", ip: ""} in network mk-multinode-922780: {Iface:virbr1 ExpiryTime:2024-09-27 19:11:47 +0000 UTC Type:0 Mac:52:54:00:79:fa:c1 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-922780-m02 Clientid:01:52:54:00:79:fa:c1}
	I0927 18:13:25.294143   50075 main.go:141] libmachine: (multinode-922780-m02) DBG | domain multinode-922780-m02 has defined IP address 192.168.39.108 and MAC address 52:54:00:79:fa:c1 in network mk-multinode-922780
	I0927 18:13:25.294302   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .GetSSHPort
	I0927 18:13:25.294491   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .GetSSHKeyPath
	I0927 18:13:25.294628   50075 main.go:141] libmachine: (multinode-922780-m02) Calling .GetSSHUsername
	I0927 18:13:25.294770   50075 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19712-11184/.minikube/machines/multinode-922780-m02/id_rsa Username:docker}
	I0927 18:13:25.377328   50075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0927 18:13:25.391029   50075 status.go:176] multinode-922780-m02 status: &{Name:multinode-922780-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0927 18:13:25.391090   50075 status.go:174] checking status of multinode-922780-m03 ...
	I0927 18:13:25.391496   50075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0927 18:13:25.391548   50075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0927 18:13:25.406961   50075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I0927 18:13:25.407421   50075 main.go:141] libmachine: () Calling .GetVersion
	I0927 18:13:25.407919   50075 main.go:141] libmachine: Using API Version  1
	I0927 18:13:25.407941   50075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0927 18:13:25.408246   50075 main.go:141] libmachine: () Calling .GetMachineName
	I0927 18:13:25.408430   50075 main.go:141] libmachine: (multinode-922780-m03) Calling .GetState
	I0927 18:13:25.409878   50075 status.go:364] multinode-922780-m03 host status = "Stopped" (err=<nil>)
	I0927 18:13:25.409891   50075 status.go:377] host is not running, skipping remaining checks
	I0927 18:13:25.409896   50075 status.go:176] multinode-922780-m03 status: &{Name:multinode-922780-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
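
The stop is issued per node, and the subsequent status call exits non-zero (7 in this run) to flag that one host is down rather than to signal a command failure. Roughly:

	# stop only the m03 worker
	out/minikube-linux-amd64 -p multinode-922780 node stop m03
	# m03 now shows host/kubelet Stopped; the command returns exit status 7
	out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
	echo $?   # 7 in this run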

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 node start m03 -v=7 --alsologtostderr: (38.658641095s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.29s)
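
Bringing the stopped node back is a node start on the same profile, followed by the usual status and kubectl checks:

	out/minikube-linux-amd64 -p multinode-922780 node start m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p multinode-922780 status -v=7 --alsologtostderr
	kubectl get nodes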

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-922780 node delete m03: (1.813401979s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.33s)
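
Node removal plus the readiness re-check can be reproduced as below; the go-template prints one Ready condition status per remaining node, and is re-quoted here for an interactive shell (the log shows the unescaped form the test harness passes through):

	out/minikube-linux-amd64 -p multinode-922780 node delete m03
	out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
	kubectl get nodes
	kubectl get nodes -o 'go-template={{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'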

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (186.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922780 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922780 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m6.111585471s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (186.63s)
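
The restart is simply a full start against the existing profile with --wait=true, which brings every node back and blocks until the cluster is Ready before the status and kubectl checks run:

	out/minikube-linux-amd64 start -p multinode-922780 --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p multinode-922780 status --alsologtostderr
	kubectl get nodes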

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-922780
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922780-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-922780-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.481631ms)

                                                
                                                
-- stdout --
	* [multinode-922780-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-922780-m02' is duplicated with machine name 'multinode-922780-m02' in profile 'multinode-922780'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-922780-m03 --driver=kvm2  --container-runtime=crio
E0927 18:25:16.998838   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-922780-m03 --driver=kvm2  --container-runtime=crio: (42.075945922s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-922780
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-922780: exit status 80 (207.866225ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-922780 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-922780-m03 already exists in multinode-922780-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-922780-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-922780-m03: (1.01063322s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.40s)
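
Both failure modes above are driven from the CLI: creating a profile whose name collides with an existing machine name is rejected up front (MK_USAGE, exit 14), and node add refuses when the name it would assign already belongs to a standalone profile (GUEST_NODE_ADD, exit 80). A condensed sketch of the sequence from this run:

	# list the existing profile's nodes first
	out/minikube-linux-amd64 node list -p multinode-922780
	# collides with the m02 machine of multinode-922780 -> exit 14
	out/minikube-linux-amd64 start -p multinode-922780-m02 --driver=kvm2 --container-runtime=crio
	# a differently named profile starts fine...
	out/minikube-linux-amd64 start -p multinode-922780-m03 --driver=kvm2 --container-runtime=crio
	# ...but now adding a node to the original profile clashes with it -> exit 80
	out/minikube-linux-amd64 node add -p multinode-922780
	# clean up the throwaway profile
	out/minikube-linux-amd64 delete -p multinode-922780-m03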

                                                
                                    
x
+
TestScheduledStopUnix (110.96s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-565305 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-565305 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.33954899s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-565305 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-565305 -n scheduled-stop-565305
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-565305 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0927 18:29:39.009106   18368 retry.go:31] will retry after 61.515µs: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.010291   18368 retry.go:31] will retry after 146.481µs: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.011432   18368 retry.go:31] will retry after 209.105µs: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.012574   18368 retry.go:31] will retry after 319.338µs: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.013706   18368 retry.go:31] will retry after 474.003µs: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.014849   18368 retry.go:31] will retry after 1.11858ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.017059   18368 retry.go:31] will retry after 870.878µs: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.018218   18368 retry.go:31] will retry after 2.458607ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.021462   18368 retry.go:31] will retry after 1.8138ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.023723   18368 retry.go:31] will retry after 5.461841ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.029981   18368 retry.go:31] will retry after 5.434139ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.036226   18368 retry.go:31] will retry after 12.238695ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.049595   18368 retry.go:31] will retry after 13.52752ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.063831   18368 retry.go:31] will retry after 22.510358ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
I0927 18:29:39.087090   18368 retry.go:31] will retry after 30.567137ms: open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/scheduled-stop-565305/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-565305 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-565305 -n scheduled-stop-565305
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-565305
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-565305 --schedule 15s
E0927 18:30:16.998938   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-565305
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-565305: exit status 7 (62.251508ms)

                                                
                                                
-- stdout --
	scheduled-stop-565305
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-565305 -n scheduled-stop-565305
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-565305 -n scheduled-stop-565305: exit status 7 (64.783744ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-565305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-565305
--- PASS: TestScheduledStopUnix (110.96s)
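
The scheduled-stop flow exercised above: schedule a stop far in the future, replace it with a shorter schedule, cancel it, schedule again, and finally observe the host transition to Stopped (at which point status exits 7). Roughly:

	out/minikube-linux-amd64 start -p scheduled-stop-565305 --memory=2048 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 stop -p scheduled-stop-565305 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-565305 --schedule 15s
	out/minikube-linux-amd64 stop -p scheduled-stop-565305 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-565305 --schedule 15s
	# once the 15s schedule fires, both of these report Stopped and exit 7
	out/minikube-linux-amd64 status -p scheduled-stop-565305
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-565305 -n scheduled-stop-565305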

                                                
                                    
x
+
TestRunningBinaryUpgrade (203.61s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4204548476 start -p running-upgrade-158112 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4204548476 start -p running-upgrade-158112 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m43.002486032s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-158112 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-158112 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m37.006106053s)
helpers_test.go:175: Cleaning up "running-upgrade-158112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-158112
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-158112: (1.252007003s)
--- PASS: TestRunningBinaryUpgrade (203.61s)
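
The upgrade path here is: create a cluster with an old release binary, then run start on the same, still-running profile with the binary under test, which upgrades it in place. A sketch, where /tmp/minikube-v1.26.0 stands in for the randomly suffixed temp file the test downloads:

	# old release creates the cluster
	/tmp/minikube-v1.26.0 start -p running-upgrade-158112 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	# new binary upgrades the same profile without stopping it first
	out/minikube-linux-amd64 start -p running-upgrade-158112 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-158112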

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (86.51775ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-634967] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19712
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19712-11184/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19712-11184/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
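
The guard being tested: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive, and the error points at clearing any globally pinned version. In shell terms:

	# rejected immediately with MK_USAGE (exit 14); no VM is created
	out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# remedy suggested by the error text if a version is set in the global config
	minikube config unset kubernetes-version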

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (117.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-634967 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-634967 --driver=kvm2  --container-runtime=crio: (1m57.116126076s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-634967 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (117.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.8760366s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-634967 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-634967 status -o json: exit status 2 (247.127101ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-634967","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-634967
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-634967: (1.072881699s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (52.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-634967 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.336711218s)
--- PASS: TestNoKubernetes/serial/Start (52.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-634967 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-634967 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.805987ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
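
The "Kubernetes not running" check is just systemd queried over minikube ssh: systemctl is-active exits non-zero when the kubelet unit is inactive, and that failure propagates through ssh, which is exactly what the test expects:

	# exits non-zero (systemd status 3, surfaced as ssh exit 1) while kubelet is inactive
	out/minikube-linux-amd64 ssh -p NoKubernetes-634967 "sudo systemctl is-active --quiet service kubelet"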

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.071084814s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-634967
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-634967: (1.306258935s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (24.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-634967 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-634967 --driver=kvm2  --container-runtime=crio: (24.422887913s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-634967 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-634967 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.459402ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (107.51s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2526164914 start -p stopped-upgrade-904897 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0927 18:35:00.067699   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
E0927 18:35:16.996995   18368 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19712-11184/.minikube/profiles/functional-990577/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2526164914 start -p stopped-upgrade-904897 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (49.256422154s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2526164914 -p stopped-upgrade-904897 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2526164914 -p stopped-upgrade-904897 stop: (2.137086802s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-904897 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-904897 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.120299224s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.51s)
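
Same idea as the running-binary upgrade, except the old cluster is stopped before the new binary takes over; the follow-up MinikubeLogs subtest below only checks that logs still render afterwards. Again, /tmp/minikube-v1.26.0 stands in for the randomly suffixed temp binary the test downloads:

	/tmp/minikube-v1.26.0 start -p stopped-upgrade-904897 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.26.0 -p stopped-upgrade-904897 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-904897 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 logs -p stopped-upgrade-904897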

                                                
                                    
x
+
TestPause/serial/Start (85.4s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-670363 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
I0927 18:36:17.235834   18368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0927 18:36:19.300683   18368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0927 18:36:19.351311   18368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0927 18:36:19.351345   18368 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0927 18:36:19.351418   18368 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0927 18:36:19.351446   18368 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3900550876/002/docker-machine-driver-kvm2
I0927 18:36:19.383005   18368 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3900550876/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640 0x4670640] Decompressors:map[bz2:0xc00076ee20 gz:0xc00076ee28 tar:0xc00076edd0 tar.bz2:0xc00076ede0 tar.gz:0xc00076edf0 tar.xz:0xc00076ee00 tar.zst:0xc00076ee10 tbz2:0xc00076ede0 tgz:0xc00076edf0 txz:0xc00076ee00 tzst:0xc00076ee10 xz:0xc00076ee30 zip:0xc00076ee40 zst:0xc00076ee38] Getters:map[file:0xc000b843c0 http:0xc0007242d0 https:0xc000724320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0927 18:36:19.383050   18368 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3900550876/002/docker-machine-driver-kvm2
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-670363 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m25.397865628s)
--- PASS: TestPause/serial/Start (85.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-904897
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    

Test skip (32/207)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
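
Note: all eight TunnelCmd serial subtests above skip for the same root cause: the CI host cannot execute 'route' without a sudo password, so the tunnel harness bails out before starting. The sketch below is illustrative only (the requireRoutePrivileges helper is an assumption, not minikube's actual code); it shows the kind of precondition check that produces this skip message:

	package tunnelcheck
	
	import (
		"os/exec"
		"testing"
	)
	
	// requireRoutePrivileges skips the calling test when 'route' cannot be run
	// via passwordless sudo. 'sudo -n' refuses to prompt and exits non-zero
	// (e.g. "exit status 1") when a password would be required, which matches
	// the skip reason recorded above.
	func requireRoutePrivileges(t *testing.T) {
		t.Helper()
		if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
			t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
		}
	}
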

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
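
For context, TestGvisorAddon is gated behind an explicit command-line flag and only runs when that flag is set. A minimal sketch of such a flag-gated skip, assuming a --gvisor boolean flag (the surrounding code is illustrative, not minikube's source):

	package example
	
	import (
		"flag"
		"testing"
	)
	
	// gvisor must be enabled explicitly; by default the addon test is skipped.
	var gvisor = flag.Bool("gvisor", false, "run the gVisor addon test")
	
	func TestGvisorAddon(t *testing.T) {
		if !*gvisor {
			t.Skip("skipping test because --gvisor=false")
		}
		// exercise the gVisor addon only when explicitly requested
	}
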

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)
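
TestSkaffold skips because skaffold builds images against a Docker daemon (via docker-env), while this job exercises the crio container runtime. A minimal sketch of such a runtime-gated skip, assuming a hypothetical --container-runtime flag (illustrative only, not minikube's source):

	package example
	
	import (
		"flag"
		"testing"
	)
	
	// containerRuntime names the runtime under test; anything other than docker
	// cannot provide the docker-env that skaffold needs, so the test is skipped.
	var containerRuntime = flag.String("container-runtime", "docker", "container runtime under test")
	
	func TestSkaffold(t *testing.T) {
		if *containerRuntime != "docker" {
			t.Skipf("skaffold requires docker-env, currently testing %s container runtime", *containerRuntime)
		}
		// run skaffold against the cluster's docker daemon
	}
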

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    